Dataset columns: source (string, 470 distinct values) · url (string, length 49–167) · file_type (string, 1 distinct value) · chunk (string, length 1–512) · chunk_id (string, length 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
198_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
198_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#overview
.md
InstructBLIPVideo is an extension of the model proposed in [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
198_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#overview
.md
InstructBLIPVideo uses the same architecture as [InstructBLIP](instructblip) and works with the same checkpoints as [InstructBLIP](instructblip). The only difference is the ability to process videos. The abstract from the paper is the following:
198_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#overview
.md
*General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on
198_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#overview
.md
instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the
198_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#overview
.md
zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA
198_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#overview
.md
also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.*
198_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg" alt="drawing" width="600"/> <small> InstructBLIPVideo architecture. Taken from the <a href="https://arxiv.org/abs/2305.06500">original paper.</a> </small> This model was contributed by [RaushanTurganbay](https://huggingface.co/RaushanTurganbay). The original code can be found [here](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip).
198_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#usage-tips
.md
- The model was trained by sampling 4 frames per video, so it's recommended to sample 4 frames when preparing inputs. > [!NOTE]
198_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#usage-tips
.md
> BLIP models after release v4.46 will raise warnings about adding `processor.num_query_tokens = {{num_query_tokens}}` and expanding the model's embedding layer to add the special `<image>` token. It is strongly recommended to add these attributes to the processor if you own the model checkpoint, or to open a PR if it is not owned by you. Adding these attributes means that BLIP will add the number of query tokens required per image and expand the text with as many `<image>` placeholders as there will be query tokens.
198_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#usage-tips
.md
of query tokens required per image and expand the text with as many `<image>` placeholders as there will be query tokens. Usually this is around 500 tokens per image, so make sure that the text is not truncated, as otherwise there will be a failure when merging the embeddings.
198_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#usage-tips
.md
The attributes can be obtained from the model config as `model.config.num_query_tokens`, and the model embeddings can be expanded by following [this link](https://gist.github.com/zucchini-nlp/e9f20b054fa322f84ac9311d9ab67042).
198_2_3
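Below is a minimal sketch of the checkpoint update described in the tips above, assuming you own the checkpoint. The repository name is a placeholder and the `<video>` placeholder token string is an assumption; take the exact token and procedure from your own processor/config and the linked gist rather than from this sketch.

```python
# Hypothetical sketch: attach the attributes described above to a checkpoint you own.
# The repo name and the "<video>" special token are assumptions, not official values.
from transformers import (
    InstructBlipVideoForConditionalGeneration,
    InstructBlipVideoProcessor,
)

checkpoint = "my-org/my-instructblipvideo-checkpoint"  # placeholder repository

processor = InstructBlipVideoProcessor.from_pretrained(checkpoint)
model = InstructBlipVideoForConditionalGeneration.from_pretrained(checkpoint)

# Copy the number of query tokens from the model config onto the processor so it can
# insert the right number of placeholder tokens per frame.
processor.num_query_tokens = model.config.num_query_tokens

# Register the special placeholder token and grow the text embeddings to match.
processor.tokenizer.add_special_tokens({"additional_special_tokens": ["<video>"]})
model.resize_token_embeddings(len(processor.tokenizer), pad_to_multiple_of=64)

# Save both locally so the warning no longer appears on reload.
processor.save_pretrained("./updated-checkpoint")
model.save_pretrained("./updated-checkpoint")
```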
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoconfig
.md
[`InstructBlipVideoConfig`] is the configuration class to store the configuration of a [`InstructBlipVideoForConditionalGeneration`]. It is used to instantiate an InstructBlipVideo model according to the specified arguments, defining the vision model, Q-Former model and language model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBlipVideo
198_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoconfig
.md
the defaults will yield a similar configuration to that of the Instructblipvideo [Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`InstructBlipVideoVisionConfig`].
198_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoconfig
.md
vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`InstructBlipVideoVisionConfig`]. qformer_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`InstructBlipVideoQFormerConfig`]. text_config (`dict`, *optional*): Dictionary of configuration options used to initialize any [`PretrainedConfig`]. num_query_tokens (`int`, *optional*, defaults to 32): The number of query tokens passed through the Transformer.
198_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoconfig
.md
num_query_tokens (`int`, *optional*, defaults to 32): The number of query tokens passed through the Transformer. video_token_index (`int`, *optional*): Token index of special video token. kwargs (*optional*): Dictionary of keyword arguments. Example: ```python >>> from transformers import ( ... InstructBlipVideoVisionConfig, ... InstructBlipVideoQFormerConfig, ... OPTConfig, ... InstructBlipVideoConfig, ... InstructBlipVideoForConditionalGeneration, ... )
198_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoconfig
.md
>>> # Initializing an InstructBlipVideoConfig with Salesforce/instruct-blip-flan-t5 style configuration >>> configuration = InstructBlipVideoConfig() >>> # Initializing an InstructBlipVideoForConditionalGeneration (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration >>> model = InstructBlipVideoForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config
198_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoconfig
.md
>>> # Accessing the model configuration >>> configuration = model.config >>> # We can also initialize an InstructBlipVideoConfig from an InstructBlipVideoVisionConfig, InstructBlipVideoQFormerConfig and any PretrainedConfig >>> # Initializing InstructBlipVideo vision, InstructBlipVideo Q-Former and language model configurations >>> vision_config = InstructBlipVideoVisionConfig() >>> qformer_config = InstructBlipVideoQFormerConfig() >>> text_config = OPTConfig()
198_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoconfig
.md
>>> config = InstructBlipVideoConfig.from_vision_qformer_text_configs(vision_config, qformer_config, text_config) ``` Methods: from_vision_qformer_text_configs
198_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionconfig
.md
This is the configuration class to store the configuration of a [`InstructBlipVideoVisionModel`]. It is used to instantiate an InstructBlipVideo vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBlipVideo [Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture.
198_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionconfig
.md
[Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 1408): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 6144):
198_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionconfig
.md
Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 6144): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 39): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. image_size (`int`, *optional*, defaults to 224):
198_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionconfig
.md
Number of attention heads for each attention layer in the Transformer encoder. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 14): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
198_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 1e-10):
198_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 1e-10): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries and values in the self-attention layers. Example: ```python >>> from transformers import InstructBlipVideoVisionConfig, InstructBlipVideoVisionModel
198_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionconfig
.md
>>> # Initializing an InstructBlipVideoVisionConfig with Salesforce/instruct-blip-flan-t5 style configuration >>> configuration = InstructBlipVideoVisionConfig() >>> # Initializing an InstructBlipVideoVisionModel (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration >>> model = InstructBlipVideoVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
198_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
This is the configuration class to store the configuration of a [`InstructBlipVideoQFormerModel`]. It is used to instantiate an InstructBlipVideo Querying Transformer (Q-Former) model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBlipVideo [Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5)
198_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
the InstructBlipVideo [Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Note that [`InstructBlipVideoQFormerModel`] is very similar to [`BertLMHeadModel`] with interleaved cross-attention. Args: vocab_size (`int`, *optional*, defaults to 30522):
198_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling the model. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
198_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
198_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities.
198_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
198_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0): Token id used for padding sequences. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
198_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
198_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). cross_attention_frequency (`int`, *optional*, defaults to 2): The frequency of adding cross-attention to the Transformer layers. encoder_hidden_size (`int`, *optional*, defaults to 1408): The hidden size of the hidden states for cross-attention. Examples: ```python >>> from transformers import InstructBlipVideoQFormerConfig, InstructBlipVideoQFormerModel
198_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformerconfig
.md
>>> # Initializing an InstructBlipVideo Salesforce/instruct-blip-flan-t5 style configuration >>> configuration = InstructBlipVideoQFormerConfig() >>> # Initializing a model (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration >>> model = InstructBlipVideoQFormerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
198_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoprocessor
.md
Constructs an InstructBLIPVideo processor which wraps an InstructBLIP image processor and a LLaMA/T5 tokenizer into a single processor. [`InstructBlipVideoProcessor`] offers all the functionalities of [`InstructBlipVideoImageProcessor`] and [`AutoTokenizer`]. See the docstring of [`~InstructBlipVideoProcessor.__call__`] and [`~InstructBlipVideoProcessor.decode`] for more information. Args: image_processor (`InstructBlipVideoImageProcessor`):
198_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoprocessor
.md
Args: image_processor (`InstructBlipVideoImageProcessor`): An instance of [`InstructBlipVideoImageProcessor`]. The image processor is a required input. tokenizer (`AutoTokenizer`): An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input. qformer_tokenizer (`AutoTokenizer`): An instance of [`PreTrainedTokenizer`]. The Q-Former tokenizer is a required input. num_query_tokens (`int`, *optional*): Number of tokens used by the Q-Former as queries; should be the same as in the model's config.
198_6_1
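As a rough illustration of the processor interface described above, the sketch below runs four frames and a prompt through the processor. The checkpoint name is a placeholder and random images stand in for frames sampled from a real video, so only the returned tensor keys and shapes are meaningful.

```python
# Illustrative only: placeholder checkpoint name; random images stand in for 4 sampled frames.
import numpy as np
from PIL import Image
from transformers import InstructBlipVideoProcessor

checkpoint = "my-org/my-instructblipvideo-checkpoint"  # placeholder repository
processor = InstructBlipVideoProcessor.from_pretrained(checkpoint)

# 4 frames, matching the sampling the model was trained with.
frames = [
    Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8))
    for _ in range(4)
]

inputs = processor(images=frames, text="What is happening in this video?", return_tensors="pt")

# Tokenized prompt for the language model, tokenized prompt for the Q-Former,
# and the preprocessed video frames.
print(inputs["input_ids"].shape, inputs["qformer_input_ids"].shape, inputs["pixel_values"].shape)
```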
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoimageprocessor
.md
Constructs an InstructBLIPVideo image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`dict`, *optional*, defaults to `{"height": 384, "width": 384}`): Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method.
198_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoimageprocessor
.md
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): Resampling filter to use if resizing the image. Only has an effect if `do_resize` is set to `True`. Can be overridden by the `resample` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
198_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoimageprocessor
.md
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Only has an effect if `do_rescale` is set to `True`. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`):
198_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoimageprocessor
.md
overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of
198_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoimageprocessor
.md
Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
198_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoimageprocessor
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. Methods: preprocess
198_7_5
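To make the `preprocess`-time overrides mentioned above concrete, here is a small sketch; the frame data is random and the override values are arbitrary, the point being only that per-call arguments take precedence over the constructor defaults.

```python
# Sketch of overriding the defaults listed above at call time; frame data is random
# and the override values are arbitrary.
import numpy as np
from transformers import InstructBlipVideoImageProcessor

image_processor = InstructBlipVideoImageProcessor()  # defaults: 384x384, bicubic resampling

# One video clip given as a list of 4 frames in (H, W, C) uint8 format.
clip = [np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8) for _ in range(4)]

# Per-call arguments override the constructor defaults (here: a smaller output size).
batch = image_processor.preprocess(clip, size={"height": 224, "width": 224}, return_tensors="pt")
print(batch["pixel_values"].shape)
```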
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideovisionmodel
.md
No docstring available for InstructBlipVideoVisionModel Methods: forward
198_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoqformermodel
.md
Querying Transformer (Q-Former), used in InstructBlipVideo. Slightly modified from BLIP-2 as it also takes the instruction as input. Methods: forward
198_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoforconditionalgeneration
.md
InstructBlipVideo Model for generating text given a video and an optional text prompt. The model consists of a vision encoder, Querying Transformer (Q-Former) and a language model. One can optionally pass `input_ids` to the model, which serve as a text prompt, to make the language model continue the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token.
198_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoforconditionalgeneration
.md
the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
198_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoforconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`InstructBlipVideoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
198_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblipvideo.md
https://huggingface.co/docs/transformers/en/model_doc/instructblipvideo/#instructblipvideoforconditionalgeneration
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate
198_10_3
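A hedged end-to-end sketch of conditional generation with this class follows; the checkpoint name is a placeholder and random frames stand in for 4 frames sampled from a real video, so the decoded output itself is not meaningful.

```python
# Hedged sketch: placeholder checkpoint; random frames stand in for a real 4-frame clip.
import numpy as np
import torch
from transformers import InstructBlipVideoForConditionalGeneration, InstructBlipVideoProcessor

checkpoint = "my-org/my-instructblipvideo-checkpoint"  # placeholder repository
processor = InstructBlipVideoProcessor.from_pretrained(checkpoint)
model = InstructBlipVideoForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16, device_map="auto"
)

clip = list(np.random.randint(0, 255, (4, 384, 384, 3), dtype=np.uint8))  # 4 frames, (H, W, C)
prompt = "Describe what is happening in this video."

inputs = processor(images=clip, text=prompt, return_tensors="pt").to(model.device, torch.float16)

# `generate` prepends the Q-Former's query-token features to the text prompt
# before running the language model.
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```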
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
199_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
199_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#overview
.md
OWLv2 was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2 scales up [OWL-ViT](owlvit) using self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. This results in large gains over the previous state-of-the-art for zero-shot object detection. The abstract from the paper is the following:
199_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#overview
.md
*Open-vocabulary object detection has benefited greatly from pretrained vision-language models, but is still limited by the amount of available detection training data. While detection training data can be expanded by using Web image-text pairs as weak supervision, this has not been done at scales comparable to image-level pretraining. Here, we scale up detection data with self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. Major challenges in scaling
199_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#overview
.md
which uses an existing detector to generate pseudo-box annotations on image-text pairs. Major challenges in scaling self-training are the choice of label space, pseudo-annotation filtering, and training efficiency. We present the OWLv2 model and OWL-ST self-training recipe, which address these challenges. OWLv2 surpasses the performance of previous state-of-the-art open-vocabulary detectors already at comparable training scales (~10M examples). However, with OWL-ST, we can scale to over 1B examples,
199_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#overview
.md
detectors already at comparable training scales (~10M examples). However, with OWL-ST, we can scale to over 1B examples, yielding further large improvement: With an L/14 architecture, OWL-ST improves AP on LVIS rare classes, for which the model has seen no human box annotations, from 31.2% to 44.6% (43% relative improvement). OWL-ST unlocks Web-scale training for open-world localization, similar to what has been seen for image classification and language modelling.*
199_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/owlv2_overview.png" alt="drawing" width="600"/> <small> OWLv2 high-level overview. Taken from the <a href="https://arxiv.org/abs/2306.09683">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit).
199_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
OWLv2 is, just like its predecessor [OWL-ViT](owlvit), a zero-shot text-conditioned object detection model. OWL-ViT uses [CLIP](clip) as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the
199_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
199_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
[`Owlv2ImageProcessor`] can be used to resize (or rescale) and normalize images for the model and [`CLIPTokenizer`] is used to encode the text. [`Owlv2Processor`] wraps [`Owlv2ImageProcessor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [`Owlv2Processor`] and [`Owlv2ForObjectDetection`]. ```python >>> import requests >>> from PIL import Image >>> import torch
199_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
>>> from transformers import Owlv2Processor, Owlv2ForObjectDetection >>> processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble") >>> model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")
199_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text_labels = [["a photo of a cat", "a photo of a dog"]] >>> inputs = processor(text=text_labels, images=image, return_tensors="pt") >>> outputs = model(**inputs)
199_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
>>> # Target image sizes (height, width) to rescale box predictions [batch_size, 2] >>> target_sizes = torch.tensor([(image.height, image.width)]) >>> # Convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax) >>> results = processor.post_process_grounded_object_detection( ... outputs=outputs, target_sizes=target_sizes, threshold=0.1, text_labels=text_labels ... ) >>> # Retrieve predictions for the first image for the corresponding text queries
199_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
... ) >>> # Retrieve predictions for the first image for the corresponding text queries >>> result = results[0] >>> boxes, scores, text_labels = result["boxes"], result["scores"], result["text_labels"] >>> for box, score, text_label in zip(boxes, scores, text_labels): ... box = [round(i, 2) for i in box.tolist()] ... print(f"Detected {text_label} with confidence {round(score.item(), 3)} at location {box}") Detected a photo of a cat with confidence 0.614 at location [341.67, 23.39, 642.32, 371.35]
199_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#usage-example
.md
Detected a photo of a cat with confidence 0.614 at location [341.67, 23.39, 642.32, 371.35] Detected a photo of a cat with confidence 0.665 at location [6.75, 51.96, 326.62, 473.13] ```
199_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#resources
.md
- A demo notebook on using OWLv2 for zero- and one-shot (image-guided) object detection can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/OWLv2). - [Zero-shot object detection task guide](../tasks/zero_shot_object_detection) <Tip>
199_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#resources
.md
- [Zero-shot object detection task guide](../tasks/zero_shot_object_detection) <Tip> The architecture of OWLv2 is identical to [OWL-ViT](owlvit); however, the object detection head now also includes an objectness classifier, which predicts the (query-agnostic) likelihood that a predicted box contains an object (as opposed to background). The objectness score can be used to rank or filter predictions independently of text queries, as shown in the sketch after this tip.
199_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#resources
.md
Usage of OWLv2 is identical to [OWL-ViT](owlvit) with a new, updated image processor ([`Owlv2ImageProcessor`]). </Tip>
199_3_2
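A hedged sketch of the objectness-based filtering mentioned in the tip above. It assumes the detection output exposes a per-box `objectness_logits` field alongside `pred_boxes`, and it reuses the `model` and `inputs` objects from the usage example earlier on this page.

```python
# Hedged sketch: assumes the OWLv2 detection output exposes per-box `objectness_logits`
# alongside `pred_boxes`. Reuses `model` and `inputs` from the usage example above.
import torch

with torch.no_grad():
    outputs = model(**inputs)

objectness = torch.sigmoid(outputs.objectness_logits[0])  # query-agnostic score per predicted box
boxes = outputs.pred_boxes[0]                             # normalized box coordinates, one row per box

# Keep the 5 boxes most likely to contain *any* object, independently of the text queries.
top_scores, top_idx = objectness.topk(5)
for score, box in zip(top_scores, boxes[top_idx]):
    print(f"objectness {score.item():.3f} at normalized box {[round(v, 3) for v in box.tolist()]}")
```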
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2config
.md
[`Owlv2Config`] is the configuration class to store the configuration of an [`Owlv2Model`]. It is used to instantiate an OWLv2 model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the OWLv2 [google/owlv2-base-patch16](https://huggingface.co/google/owlv2-base-patch16) architecture.
199_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2config
.md
[google/owlv2-base-patch16](https://huggingface.co/google/owlv2-base-patch16) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: text_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`Owlv2TextConfig`]. vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`Owlv2VisionConfig`].
199_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2config
.md
vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`Owlv2VisionConfig`]. projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and vision projection layers. logit_scale_init_value (`float`, *optional*, defaults to 2.6592): The initial value of the *logit_scale* parameter. Default is used as per the original OWLv2 implementation. return_dict (`bool`, *optional*, defaults to `True`):
199_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2config
.md
implementation. return_dict (`bool`, *optional*, defaults to `True`): Whether or not the model should return a dictionary. If `False`, returns a tuple. kwargs (*optional*): Dictionary of keyword arguments. Methods: from_text_vision_configs
199_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
This is the configuration class to store the configuration of an [`Owlv2TextModel`]. It is used to instantiate an Owlv2 text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Owlv2 [google/owlv2-base-patch16](https://huggingface.co/google/owlv2-base-patch16) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
199_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 49408): Vocabulary size of the OWLv2 text model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Owlv2TextModel`]. hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the encoder layers and the pooler layer.
199_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 2048): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder.
199_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (`int`, *optional*, defaults to 16): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
199_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
199_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). pad_token_id (`int`, *optional*, defaults to 0): The id of the padding token in the input sequences.
199_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
testing). pad_token_id (`int`, *optional*, defaults to 0): The id of the padding token in the input sequences. bos_token_id (`int`, *optional*, defaults to 49406): The id of the beginning-of-sequence token in the input sequences. eos_token_id (`int`, *optional*, defaults to 49407): The id of the end-of-sequence token in the input sequences. Example: ```python >>> from transformers import Owlv2TextConfig, Owlv2TextModel
199_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textconfig
.md
>>> # Initializing an Owlv2TextModel with google/owlv2-base-patch16 style configuration >>> configuration = Owlv2TextConfig() >>> # Initializing an Owlv2TextModel (with random weights) from the google/owlv2-base-patch16 style configuration >>> model = Owlv2TextModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
199_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
This is the configuration class to store the configuration of an [`Owlv2VisionModel`]. It is used to instantiate an OWLv2 image encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the OWLv2 [google/owlv2-base-patch16](https://huggingface.co/google/owlv2-base-patch16) architecture.
199_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
[google/owlv2-base-patch16](https://huggingface.co/google/owlv2-base-patch16) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072):
199_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3):
199_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): Number of channels in the input images. image_size (`int`, *optional*, defaults to 768): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
199_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
199_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). Example: ```python
199_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
testing). Example: ```python >>> from transformers import Owlv2VisionConfig, Owlv2VisionModel
199_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionconfig
.md
>>> # Initializing an Owlv2VisionModel with google/owlv2-base-patch16 style configuration >>> configuration = Owlv2VisionConfig() >>> # Initializing an Owlv2VisionModel (with random weights) from the google/owlv2-base-patch16 style configuration >>> model = Owlv2VisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
199_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2imageprocessor
.md
Constructs an OWLv2 image processor. Args: do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method. do_pad (`bool`, *optional*, defaults to `True`):
199_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2imageprocessor
.md
method. do_pad (`bool`, *optional*, defaults to `True`): Whether to pad the image to a square with gray pixels on the bottom and the right. Can be overridden by `do_pad` in the `preprocess` method. do_resize (`bool`, *optional*, defaults to `True`): Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"height": 960, "width": 960}`):
199_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2imageprocessor
.md
by `do_resize` in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"height": 960, "width": 960}`): Size to resize the image to. Can be overridden by `size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling method to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`):
199_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2imageprocessor
.md
do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `OPENAI_CLIP_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
199_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2imageprocessor
.md
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `OPENAI_CLIP_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. Methods: preprocess - post_process_object_detection - post_process_image_guided_detection
199_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2processor
.md
Owlv2Processor Constructs an Owlv2 processor which wraps [`Owlv2ImageProcessor`] and [`CLIPTokenizer`]/[`CLIPTokenizerFast`] into a single processor that inherits both the image processor and tokenizer functionalities. See [`~OwlViTProcessor.__call__`] and [`~OwlViTProcessor.decode`] for more information. Args: image_processor ([`Owlv2ImageProcessor`]): The image processor is a required input. tokenizer ([`CLIPTokenizer`, `CLIPTokenizerFast`]): The tokenizer is a required input. - __call__
199_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2processor
.md
tokenizer ([`CLIPTokenizer`, `CLIPTokenizerFast`]): The tokenizer is a required input. - __call__ - post_process_grounded_object_detection - post_process_image_guided_detection
199_8_1
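Since the processor also exposes `post_process_image_guided_detection`, here is a hedged sketch of one-shot (image-guided) detection. It assumes, as for OWL-ViT, that the processor accepts a `query_images` argument and that the model provides an `image_guided_detection` method; the query crop coordinates are illustrative.

```python
# Hedged sketch of image-guided (one-shot) detection. Assumes, as for OWL-ViT, that the
# processor accepts `query_images` and the model exposes `image_guided_detection`.
import requests
import torch
from PIL import Image
from transformers import Owlv2ForObjectDetection, Owlv2Processor

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
query_image = image.crop((340, 20, 640, 370))  # illustrative crop used as the visual query

inputs = processor(images=image, query_images=query_image, return_tensors="pt")
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)

# Rescale the predicted boxes to the original image size and print them.
target_sizes = torch.tensor([(image.height, image.width)])
results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)
for box, score in zip(results[0]["boxes"], results[0]["scores"]):
    print(f"score {score.item():.3f} at box {[round(v, 1) for v in box.tolist()]}")
```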
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2model
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
199_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2model
.md
and behavior. Parameters: config ([`Owlv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - get_text_features - get_image_features
199_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2textmodel
.md
No docstring available for Owlv2TextModel Methods: forward
199_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2visionmodel
.md
No docstring available for Owlv2VisionModel Methods: forward
199_11_0