source stringclasses 470 values | url stringlengths 49 167 | file_type stringclasses 1 value | chunk stringlengths 1 512 | chunk_id stringlengths 5 9 |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
|
149_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less or equal to `shortest_edge` and the longest edge
less or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
|
149_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
|
149_5_3
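A minimal sketch of the `size` options described above, assuming `torch` and `torchvision` are installed (the fast image processor depends on them); the placeholder image and values are purely illustrative.
```python
from PIL import Image
from transformers import DeformableDetrImageProcessorFast

# Default, aspect-ratio preserving behaviour: shortest edge towards 800,
# longest edge capped at 1333.
processor = DeformableDetrImageProcessorFast(
    size={"shortest_edge": 800, "longest_edge": 1333}
)

image = Image.new("RGB", (640, 480))  # placeholder image, for illustration only
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # resized (and padded) (1, 3, H, W) tensor

# Exact resize that ignores the aspect ratio.
fixed_processor = DeformableDetrImageProcessorFast(size={"height": 800, "width": 800})
```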
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
|
149_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
|
149_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_annotations (`bool`, *optional*, defaults to `True`):
Controls whether to convert the annotations to the format expected by the Deformable DETR model. Converts the
|
149_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
Controls whether to convert the annotations to the format expected by the Deformable DETR model. Converts the
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
|
149_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
|
149_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast
|
.md
|
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
height and width in the batch.
Methods: preprocess
- post_process_object_detection
|
149_5_9
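A short sketch of the `do_pad` / `pad_size` behaviour under the same assumptions as above; the fixed 1333x1333 canvas is an illustrative choice that is at least as large as the default `longest_edge`.
```python
from PIL import Image
from transformers import DeformableDetrImageProcessorFast

# Pad every image to a fixed canvas instead of the per-batch maximum.
processor = DeformableDetrImageProcessorFast(
    do_pad=True,
    pad_size={"height": 1333, "width": 1333},  # must cover every resized image
)
batch = [Image.new("RGB", (640, 480)), Image.new("RGB", (1024, 768))]
inputs = processor(images=batch, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (2, 3, 1333, 1333)
```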
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrfeatureextractor
|
.md
|
No docstring available for DeformableDetrFeatureExtractor
Methods: __call__
- post_process_object_detection
|
149_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
This is the configuration class to store the configuration of a [`DeformableDetrModel`]. It is used to instantiate
a Deformable DETR model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Deformable DETR
[SenseTime/deformable-detr](https://huggingface.co/SenseTime/deformable-detr) architecture.
|
149_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
[SenseTime/deformable-detr](https://huggingface.co/SenseTime/deformable-detr) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
use_timm_backbone (`bool`, *optional*, defaults to `True`):
Whether or not to use the `timm` library for the backbone. If set to `False`, will use the [`AutoBackbone`]
API.
backbone_config (`PretrainedConfig` or `dict`, *optional*):
|
149_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
API.
backbone_config (`PretrainedConfig` or `dict`, *optional*):
The configuration of the backbone model. Only used in case `use_timm_backbone` is set to `False` in which
case it will default to `ResNetConfig()`.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
num_queries (`int`, *optional*, defaults to 300):
Number of object queries, i.e. detection slots. This is the maximal number of objects
|
149_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
Number of object queries, i.e. detection slots. This is the maximal number of objects
[`DeformableDetrModel`] can detect in a single image. In case `two_stage` is set to `True`, we use
`two_stage_num_proposals` instead.
d_model (`int`, *optional*, defaults to 256):
Dimension of the layers.
encoder_layers (`int`, *optional*, defaults to 6):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 6):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 8):
|
149_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 1024):
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 1024):
|
149_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
encoder_ffn_dim (`int`, *optional*, defaults to 1024):
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
149_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
149_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (`float`, *optional*, defaults to 1):
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (`bool`, *optional*, defaults to `False`):
|
149_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
for more details.
auxiliary_loss (`bool`, *optional*, defaults to `False`):
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (`str`, *optional*, defaults to `"sine"`):
Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`.
backbone (`str`, *optional*, defaults to `"resnet50"`):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
|
149_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `True`):
Whether to use pretrained weights for the backbone.
backbone_kwargs (`dict`, *optional*):
|
149_7_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
Whether to use pretrained weights for the backbone.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
dilation (`bool`, *optional*, defaults to `False`):
Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when
`use_timm_backbone` = `True`.
class_cost (`float`, *optional*, defaults to 1):
|
149_7_10
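A hedged sketch combining the backbone-related arguments described above; the timm model name and `out_indices` choice are illustrative, not a recommended setting.
```python
from transformers import DeformableDetrConfig

# Timm ResNet-50 backbone with pretrained weights, exposing a chosen set of stages.
config = DeformableDetrConfig(
    use_timm_backbone=True,
    backbone="resnet50",
    use_pretrained_backbone=True,
    backbone_kwargs={"out_indices": (2, 3, 4)},  # illustrative choice of feature maps
)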
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
`use_timm_backbone` = `True`.
class_cost (`float`, *optional*, defaults to 1):
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (`float`, *optional*, defaults to 5):
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (`float`, *optional*, defaults to 1):
|
149_7_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
mask_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (`float`, *optional*, defaults to 5):
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (`float`, *optional*, defaults to 2):
|
149_7_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
giou_loss_coefficient (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (`float`, *optional*, defaults to 0.1):
Relative classification weight of the 'no-object' class in the object detection loss.
num_feature_levels (`int`, *optional*, defaults to 4):
The number of input feature levels.
encoder_n_points (`int`, *optional*, defaults to 4):
The number of sampled keys in each feature level for each attention head in the encoder.
|
149_7_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
The number of sampled keys in each feature level for each attention head in the encoder.
decoder_n_points (`int`, *optional*, defaults to 4):
The number of sampled keys in each feature level for each attention head in the decoder.
two_stage (`bool`, *optional*, defaults to `False`):
Whether to apply a two-stage deformable DETR, where the region proposals are also generated by a variant of
Deformable DETR, which are further fed into the decoder for iterative bounding box refinement.
|
149_7_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
Deformable DETR, which are further fed into the decoder for iterative bounding box refinement.
two_stage_num_proposals (`int`, *optional*, defaults to 300):
The number of region proposals to be generated, in case `two_stage` is set to `True`.
with_box_refine (`bool`, *optional*, defaults to `False`):
Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes
based on the predictions from the previous layer.
focal_alpha (`float`, *optional*, defaults to 0.25):
|
149_7_15
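A sketch, for illustration only, of the two-stage and box-refinement flags described above.
```python
from transformers import DeformableDetrConfig, DeformableDetrForObjectDetection

config = DeformableDetrConfig(
    two_stage=True,               # proposals come from an encoder-side Deformable DETR variant
    with_box_refine=True,         # each decoder layer refines the previous layer's boxes
    two_stage_num_proposals=300,  # used instead of `num_queries` when `two_stage=True`
)
model = DeformableDetrForObjectDetection(config)  # randomly initialized weights
```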
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
based on the predictions from the previous layer.
focal_alpha (`float`, *optional*, defaults to 0.25):
Alpha parameter in the focal loss.
disable_custom_kernels (`bool`, *optional*, defaults to `False`):
Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom
kernels are not supported by PyTorch ONNX export.
Examples:
```python
>>> from transformers import DeformableDetrConfig, DeformableDetrModel
|
149_7_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrconfig
|
.md
|
>>> # Initializing a Deformable DETR SenseTime/deformable-detr style configuration
>>> configuration = DeformableDetrConfig()
>>> # Initializing a model (with random weights) from the SenseTime/deformable-detr style configuration
>>> model = DeformableDetrModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
149_7_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrmodel
|
.md
|
The bare Deformable DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw
hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
149_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DeformableDetrConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
149_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
149_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrforobjectdetection
|
.md
|
Deformable DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on
top, for tasks such as COCO detection.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
149_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrforobjectdetection
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DeformableDetrConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
149_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrforobjectdetection
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
149_9_2
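A hedged end-to-end sketch for [`DeformableDetrForObjectDetection`] together with `post_process_object_detection`; the checkpoint, image URL, and threshold are illustrative choices.
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw logits and normalized boxes into labels and absolute pixel coordinates.
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=[image.size[::-1]]
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```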
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
150_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
150_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#overview
|
.md
|
The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image.
|
150_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#overview
|
.md
|
**Model Architecture:** Llama 3.2-Vision is built on top of Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model.
|
150_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#overview
|
.md
|
Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM.
|
150_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#usage-tips
|
.md
|
- For image+text and text inputs use `MllamaForConditionalGeneration`.
- For text-only inputs use `MllamaForCausalLM` for generation to avoid loading the vision tower.
- Each sample can contain multiple images, and the number of images can vary between samples. The processor will pad the inputs to the maximum number of images across samples and to a maximum number of tiles within each image.
- The text passed to the processor should have the `"<|image|>"` tokens where the images should be inserted.
|
150_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#usage-tips
|
.md
|
- The text passed to the processor should have the `"<|image|>"` tokens where the images should be inserted.
- The processor has its own `apply_chat_template` method to convert chat messages to text that can then be passed as text to the processor.
<Tip warning={true}>
|
150_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#usage-tips
|
.md
|
Mllama has an extra token used as a placeholder for image positions in the text. This means that the input ids and the input embedding layer have an extra token. But since the weights for the input and output embeddings are not tied, the `lm_head` layer has one token fewer and will fail if you want to calculate loss on image tokens or apply some logit processors. In case you are training, make sure to mask out the special `"<|image|>"` tokens in the `labels`, as the model should not be trained on predicting them (see the sketch below).
|
150_2_2
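A minimal, hedged sketch of that masking step; it assumes `processor` and `inputs` were created as in the generation examples further down, and uses the standard `-100` ignore index for the loss.
```python
labels = inputs["input_ids"].clone()
image_token_id = processor.tokenizer.convert_tokens_to_ids("<|image|>")
labels[labels == image_token_id] = -100            # never compute loss on image placeholders
if processor.tokenizer.pad_token_id is not None:
    labels[labels == processor.tokenizer.pad_token_id] = -100  # ignore padding as well
inputs["labels"] = labels
```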
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#usage-tips
|
.md
|
Otherwise, if you see CUDA-side index errors when generating, use the code below to expand the `lm_head` by one more token.
```python
old_embeddings = model.get_output_embeddings()
|
150_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#usage-tips
|
.md
|
num_tokens = model.vocab_size + 1
resized_embeddings = model._get_resized_lm_head(old_embeddings, new_num_tokens=num_tokens, mean_resizing=True)
resized_embeddings.requires_grad_(old_embeddings.weight.requires_grad)
model.set_output_embeddings(resized_embeddings)
```
</Tip>
|
150_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#instruct-model
|
.md
|
```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id)
|
150_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#instruct-model
|
.md
|
messages = [
    [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What does the image show?"}
            ]
        }
    ],
]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=text, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=25)
print(processor.decode(output[0]))
```
|
150_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#base-model
|
.md
|
```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
model_id = "meta-llama/Llama-3.2-11B-Vision"
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id)
|
150_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#base-model
|
.md
|
prompt = "<|image|>If I had to write a haiku for this one"
url = "https://llava-vl.github.io/static/images/view.jpg"
raw_image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=raw_image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=False, max_new_tokens=25)
print(processor.decode(output[0], skip_special_tokens=True))
```
|
150_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaconfig
|
.md
|
This is the configuration class to store the configuration of a [`MllamaForConditionalGeneration`]. It is used to instantiate an
Mllama model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Mllama-9B.
e.g. [meta-llama/Llama-3.2-11B-Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision)
|
150_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaconfig
|
.md
|
e.g. [meta-llama/Llama-3.2-11B-Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`Union[AutoConfig, dict]`, *optional*, defaults to `MllamaVisionConfig`):
The config object or dictionary of the vision backbone.
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `MllamaTextConfig`):
|
150_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaconfig
|
.md
|
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `MllamaTextConfig`):
The config object or dictionary of the text backbone.
image_token_index (`int`, *optional*, defaults to 128256):
The image token index to encode the image prompt.
Example:
```python
>>> from transformers import MllamaForConditionalGeneration, MllamaConfig, MllamaVisionConfig, MllamaTextConfig
|
150_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaconfig
|
.md
|
>>> # Initializing a CLIP-vision config
>>> vision_config = MllamaVisionConfig()
>>> # Initializing a Llama config
>>> text_config = MllamaTextConfig()
>>> # Initializing a mllama-11b style configuration
>>> configuration = MllamaConfig(vision_config, text_config)
>>> # Initializing a model from the mllama-11b style configuration
>>> model = MllamaForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
150_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaprocessor
|
.md
|
Constructs a Mllama processor which wraps [`MllamaImageProcessor`] and
[`PreTrainedTokenizerFast`] into a single processor that inherits both the image processor and
tokenizer functionalities. See the [`~MllamaProcessor.__call__`] and [`~MllamaProcessor.decode`] for more
information.
The preferred way of passing kwargs is as a dictionary per modality, see usage example below.
```python
from transformers import MllamaProcessor
from PIL import Image
|
150_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaprocessor
|
.md
|
processor = MllamaProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision")
processor(
    images=your_pil_image,
    text=["<|image|>If I had to write a haiku for this one"],
    images_kwargs={"size": {"height": 448, "width": 448}},
    text_kwargs={"padding": "right"},
    common_kwargs={"return_tensors": "pt"},
)
```
Args:
image_processor ([`MllamaImageProcessor`]):
The image processor is a required input.
tokenizer ([`PreTrainedTokenizer`, `PreTrainedTokenizerFast`]):
The tokenizer is a required input.
|
150_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaimageprocessor
|
.md
|
Constructs a Mllama image processor.
Args:
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB. This is useful if the input image is of a different format e.g. RGBA.
Only has an effect if the input image is in the PIL format.
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image.
size (`Dict[str, int]`, *optional*, defaults to `self.size`):
|
150_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaimageprocessor
|
.md
|
Whether to resize the image.
size (`Dict[str, int]`, *optional*, defaults to `self.size`):
Size of the image tile. Should be a dictionary containing 'height' and 'width' keys, both with integer values.
The height and width values should be equal.
resample (`int`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
has an effect if `do_resize` is set to `True`.
|
150_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaimageprocessor
|
.md
|
has an effect if `do_resize` is set to `True`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image.
rescale_factor (`float`, *optional*, defaults to `1/255`):
Rescale factor to rescale the image by if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
|
150_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaimageprocessor
|
.md
|
Whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
`True`.
do_pad (`bool`, *optional*, defaults to `True`):
|
150_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaimageprocessor
|
.md
|
`True`.
do_pad (`bool`, *optional*, defaults to `True`):
Whether or not to pad the images to the largest height and width in the batch.
max_image_tiles (`int`, *optional*, defaults to 4):
The maximum number of tiles to split the image into.
|
150_7_4
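A small sketch of the tiling-related arguments above; the 448x448 tile size mirrors the processor example elsewhere on this page, and the placeholder image and printed output are illustrative.
```python
from PIL import Image
from transformers import MllamaImageProcessor

image_processor = MllamaImageProcessor(
    size={"height": 448, "width": 448},  # square tile size
    max_image_tiles=4,                   # split large images into at most 4 tiles
)
image = Image.new("RGB", (800, 600))  # placeholder image
outputs = image_processor(images=image, return_tensors="pt")
for name, value in outputs.items():
    print(name, getattr(value, "shape", value))  # pixel tensors plus tiling metadata
```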
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforconditionalgeneration
|
.md
|
The Mllama model which consists of a vision encoder and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
150_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MllamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
150_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
150_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforcausallm
|
.md
|
The Mllama Text Model with a language modeling head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
150_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MllamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
150_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforcausallm
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
150_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamatextmodel
|
.md
|
The Mllama Text Model which consists of a transformer with self- and cross-attention layers.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
150_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamatextmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MllamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
150_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamatextmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
150_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforcausallm
|
.md
|
The Mllama Text Model with a language modeling head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
150_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MllamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
150_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamaforcausallm
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
150_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamavisionmodel
|
.md
|
The Mllama Vision Model which consists of two vision encoders.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
150_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamavisionmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MllamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
150_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mllama.md
|
https://huggingface.co/docs/transformers/en/model_doc/mllama/#mllamavisionmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
150_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
151_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
151_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#overview
|
.md
|
The BLIP model was proposed in [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
BLIP is a model that is able to perform various multi-modal tasks including:
- Visual Question Answering
- Image-Text retrieval (Image-text matching)
- Image Captioning
The abstract from the paper is the following:
|
151_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#overview
|
.md
|
- Image-Text retrieval (Image-text matching)
- Image Captioning
The abstract from the paper is the following:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks.
|
151_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#overview
|
.md
|
However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the
|
151_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#overview
|
.md
|
to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to
|
151_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#overview
|
.md
|
in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*
|
151_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#overview
|
.md
|

This model was contributed by [ybelkada](https://huggingface.co/ybelkada).
The original code can be found [here](https://github.com/salesforce/BLIP).
|
151_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#resources
|
.md
|
- [Jupyter notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) on how to fine-tune BLIP for image captioning on a custom dataset
|
151_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipconfig
|
.md
|
[`BlipConfig`] is the configuration class to store the configuration of a [`BlipModel`]. It is used to instantiate
a BLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating
a configuration with the defaults will yield a similar configuration to that of the BLIP-base
[Salesforce/blip-vqa-base](https://huggingface.co/Salesforce/blip-vqa-base) architecture.
|
151_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipconfig
|
.md
|
[Salesforce/blip-vqa-base](https://huggingface.co/Salesforce/blip-vqa-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`BlipTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`BlipVisionConfig`].
|
151_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipconfig
|
.md
|
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`BlipVisionConfig`].
projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
The initial value of the *logit_scale* parameter. Default is used as per the original BLIP implementation.
image_text_hidden_size (`int`, *optional*, defaults to 256):
|
151_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipconfig
|
.md
|
image_text_hidden_size (`int`, *optional*, defaults to 256):
Dimensionality of the hidden state of the image-text fusion layer.
label_smoothing (`float`, *optional*, defaults to 0.0):
A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets
become a mixture of the original ground truth and a uniform distribution as described in
|
151_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipconfig
|
.md
|
become a mixture of the original ground truth and a uniform distribution as described in
[Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/abs/1512.00567). Default: `0.0`.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import BlipConfig, BlipModel
|
151_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipconfig
|
.md
|
>>> # Initializing a BlipConfig with Salesforce/blip-vqa-base style configuration
>>> configuration = BlipConfig()
>>> # Initializing a BlipModel (with random weights) from the Salesforce/blip-vqa-base style configuration
>>> model = BlipModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a BlipConfig from a BlipTextConfig and a BlipVisionConfig
|
151_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipconfig
|
.md
|
>>> # We can also initialize a BlipConfig from a BlipTextConfig and a BlipVisionConfig
>>> # Initializing a BLIPText and BLIPVision configuration
>>> config_text = BlipTextConfig()
>>> config_vision = BlipVisionConfig()
>>> config = BlipConfig.from_text_vision_configs(config_text, config_vision)
```
Methods: from_text_vision_configs
|
151_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
This is the configuration class to store the configuration of a [`BlipTextModel`]. It is used to instantiate a BLIP
text model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the `BlipText` used by the [base
architectures](https://huggingface.co/Salesforce/blip-vqa-base).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
151_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30524):
Vocabulary size of the `Blip` text model. Defines the number of different tokens that can be represented by
the `input_ids` passed when calling [`BlipModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
|
151_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
encoder_hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers from the vision model.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
|
151_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
|
151_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
|
151_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
The epsilon used by the layer normalization layers.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
151_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
bos_token_id (`int`, *optional*, defaults to 30522):
The id of the `beginning-of-sequence` token.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the `end-of-sequence` token.
pad_token_id (`int`, *optional*, defaults to 0):
The id of the `padding` token.
sep_token_id (`int`, *optional*, defaults to 102):
The id of the `separator` token.
is_decoder (`bool`, *optional*, defaults to `True`):
|
151_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
The id of the `separator` token.
is_decoder (`bool`, *optional*, defaults to `True`):
Whether the model is used as a decoder.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
label_smoothing (`float`, *optional*):
A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets
|
151_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets
become a mixture of the original ground truth and a uniform distribution as described in
[Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/abs/1512.00567). Default: `0.0`.
Example:
```python
>>> from transformers import BlipTextConfig, BlipTextModel
|
151_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextconfig
|
.md
|
>>> # Initializing a BlipTextConfig with Salesforce/blip-vqa-base style configuration
>>> configuration = BlipTextConfig()
>>> # Initializing a BlipTextModel (with random weights) from the Salesforce/blip-vqa-base style configuration
>>> model = BlipTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
151_4_9
|