source | url | file_type | chunk | chunk_id
stringclasses (470 values) | stringlengths (49-167) | stringclasses (1 value) | stringlengths (1-512) | stringlengths (5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#usage-tips-and-example
|
.md
|
The usage of AltCLIP is very similar to that of CLIP; the difference from CLIP is the text encoder. Note that we use bidirectional attention instead of causal attention
and we take the [CLS] token in XLM-R to represent the text embedding.
AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. AltCLIP uses a ViT-like transformer to get visual features and a bidirectional language model to get the text
|
288_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#usage-tips-and-example
|
.md
|
classification. AltCLIP uses a ViT-like transformer to get visual features and a bidirectional language model to get the text
features. Both the text and visual features are then projected to a latent space of identical dimension. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
|
288_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#usage-tips-and-example
|
.md
|
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as the representation of the entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model.
|
288_2_2
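As a quick illustration of the preprocessing described above, here is a minimal sketch of preparing pixel values with `CLIPImageProcessor`, assuming the `BAAI/AltCLIP` checkpoint and the sample COCO image used in the example further below (this is a sketch, not an official snippet):

```python
from PIL import Image
import requests
from transformers import CLIPImageProcessor

# load a sample image (same URL as in the usage example below)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# resize/rescale and normalize the image into model-ready pixel values
image_processor = CLIPImageProcessor.from_pretrained("BAAI/AltCLIP")
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values  # batch of (3, H, W) tensors
```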
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#usage-tips-and-example
|
.md
|
The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model.
The [`AltCLIPProcessor`] wraps a [`CLIPImageProcessor`] and a [`XLMRobertaTokenizer`] into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
[`AltCLIPProcessor`] and [`AltCLIPModel`].
```python
>>> from PIL import Image
>>> import requests
|
288_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#usage-tips-and-example
|
.md
|
>>> from transformers import AltCLIPModel, AltCLIPProcessor
>>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
|
288_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#usage-tips-and-example
|
.md
|
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
<Tip>
This model is based on `CLIPModel`; use it as you would use the original [CLIP](clip).
</Tip>
|
288_2_5
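As a small follow-up to the example above, a hedged sketch of reading out the most likely label from `probs` (it assumes the two candidate captions used in the snippet):

```python
# continues the snippet above: `probs` has shape (num_images, num_texts)
labels = ["a photo of a cat", "a photo of a dog"]
best_idx = probs.argmax(dim=1).item()
print(f"predicted: {labels[best_idx]} (p={probs[0, best_idx]:.3f})")
```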
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipconfig
|
.md
|
This is the configuration class to store the configuration of an [`AltCLIPModel`]. It is used to instantiate an
AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the AltCLIP
[BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
288_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`AltCLIPTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`AltCLIPVisionConfig`].
projection_dim (`int`, *optional*, defaults to 768):
|
288_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipconfig
|
.md
|
projection_dim (`int`, *optional*, defaults to 768):
Dimensionality of text and vision projection layers.
logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
The initial value of the *logit_scale* parameter. Default is used as per the original CLIP implementation.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import AltCLIPConfig, AltCLIPModel
|
288_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipconfig
|
.md
|
>>> # Initializing an AltCLIPConfig with BAAI/AltCLIP style configuration
>>> configuration = AltCLIPConfig()
>>> # Initializing an AltCLIPModel (with random weights) from the BAAI/AltCLIP style configuration
>>> model = AltCLIPModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize an AltCLIPConfig from an AltCLIPTextConfig and an AltCLIPVisionConfig
|
288_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipconfig
|
.md
|
>>> # We can also initialize an AltCLIPConfig from an AltCLIPTextConfig and an AltCLIPVisionConfig
>>> # Initializing an AltCLIPText and an AltCLIPVision configuration
>>> config_text = AltCLIPTextConfig()
>>> config_vision = AltCLIPVisionConfig()
>>> config = AltCLIPConfig.from_text_vision_configs(config_text, config_vision)
```
Methods: from_text_vision_configs
|
288_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
This is the configuration class to store the configuration of an [`AltCLIPTextModel`]. It is used to instantiate an
AltCLIP text model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the AltCLIP
[BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
288_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 250002):
Vocabulary size of the AltCLIP model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`AltCLIPTextModel`].
hidden_size (`int`, *optional*, defaults to 1024):
Dimensionality of the encoder layers and the pooler layer.
|
288_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 1024):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
|
288_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
288_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 514):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 1):
|
288_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 1):
The vocabulary size of the `token_type_ids` passed when calling [`AltCLIPTextModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 0.02):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
|
288_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 1): The id of the *padding* token.
bos_token_id (`int`, *optional*, defaults to 0): The id of the *beginning-of-sequence* token.
eos_token_id (`Union[int, List[int]]`, *optional*, defaults to 2):
|
288_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
eos_token_id (`Union[int, List[int]]`, *optional*, defaults to 2):
The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
|
288_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
use_cache (`bool`, *optional*, defaults to `True`):
|
288_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
project_dim (`int`, *optional*, defaults to 768):
The dimension of the teacher model before the mapping layer.
Examples:
```python
>>> from transformers import AltCLIPTextModel, AltCLIPTextConfig
|
288_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
|
.md
|
>>> # Initializing an AltCLIPTextConfig with BAAI/AltCLIP style configuration
>>> configuration = AltCLIPTextConfig()
>>> # Initializing an AltCLIPTextModel (with random weights) from the BAAI/AltCLIP style configuration
>>> model = AltCLIPTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
288_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
|
.md
|
This is the configuration class to store the configuration of an [`AltCLIPModel`]. It is used to instantiate an
AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the AltCLIP
[BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
288_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
projection_dim (`int`, *optional*, defaults to 512):
|
288_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
|
.md
|
projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 224):
|
288_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
|
.md
|
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 32):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
|
288_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
|
.md
|
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
|
288_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
|
.md
|
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
Example:
```python
>>> from transformers import AltCLIPVisionConfig, AltCLIPVisionModel
|
288_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
|
.md
|
>>> # Initializing an AltCLIPVisionConfig with BAAI/AltCLIP style configuration
>>> configuration = AltCLIPVisionConfig()
>>> # Initializing an AltCLIPVisionModel (with random weights) from the BAAI/AltCLIP style configuration
>>> model = AltCLIPVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
288_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipprocessor
|
.md
|
Constructs an AltCLIP processor which wraps a CLIP image processor and an XLM-Roberta tokenizer into a single
processor.
[`AltCLIPProcessor`] offers all the functionalities of [`CLIPImageProcessor`] and [`XLMRobertaTokenizerFast`]. See
the [`~AltCLIPProcessor.__call__`] and [`~AltCLIPProcessor.decode`] for more information.
Args:
image_processor ([`CLIPImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`XLMRobertaTokenizerFast`], *optional*):
|
288_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipprocessor
|
.md
|
The image processor is a required input.
tokenizer ([`XLMRobertaTokenizerFast`], *optional*):
The tokenizer is a required input.
|
288_6_1
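To make the "wraps an image processor and a tokenizer" description concrete, here is a hedged sketch of assembling the processor from its two components, assuming the `BAAI/AltCLIP` repository hosts both the image-processor and tokenizer files (in practice, `AltCLIPProcessor.from_pretrained` does this in one call):

```python
from transformers import AltCLIPProcessor, CLIPImageProcessor, XLMRobertaTokenizerFast

# load the two components separately, then wrap them in a single processor
image_processor = CLIPImageProcessor.from_pretrained("BAAI/AltCLIP")
tokenizer = XLMRobertaTokenizerFast.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor(image_processor=image_processor, tokenizer=tokenizer)
```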
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipmodel
|
.md
|
No docstring available for AltCLIPModel
Methods: forward
- get_text_features
- get_image_features
|
288_7_0
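Since only the method names are listed here, the following is a minimal sketch of how `get_text_features` and `get_image_features` might be called on the `BAAI/AltCLIP` checkpoint; it assumes the CLIP-style API and is not an official snippet:

```python
import torch
from PIL import Image
import requests
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
text_inputs = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_features = model.get_text_features(**text_inputs)     # projected text embeddings
    image_features = model.get_image_features(**image_inputs)  # projected image embeddings
```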
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextmodel
|
.md
|
No docstring available for AltCLIPTextModel
Methods: forward
|
288_8_0
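In the absence of a docstring here, a hedged sketch of calling the text model's `forward`; it assumes the `BAAI/AltCLIP` checkpoint and that the output exposes `last_hidden_state` and a pooled output, as in other CLIP-style text models:

```python
from transformers import AltCLIPTextModel, AutoTokenizer

model = AltCLIPTextModel.from_pretrained("BAAI/AltCLIP")
tokenizer = AutoTokenizer.from_pretrained("BAAI/AltCLIP")

inputs = tokenizer(["a photo of a cat"], return_tensors="pt", padding=True)
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # per-token hidden states
pooled_output = outputs.pooler_output          # [CLS]-based sentence representation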
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionmodel
|
.md
|
No docstring available for AltCLIPVisionModel
Methods: forward
|
288_9_0
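Similarly, a hedged sketch for the vision model's `forward`, under the same assumptions about the checkpoint and output fields:

```python
from PIL import Image
import requests
from transformers import AltCLIPVisionModel, AutoProcessor

model = AltCLIPVisionModel.from_pretrained("BAAI/AltCLIP")
processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
pooled_output = outputs.pooler_output  # pooled [CLS] state of the image
```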
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
289_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
289_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#overview
|
.md
|
The PaliGemma model was proposed in [PaliGemma – Google's Cutting-Edge Open Vision Language Model](https://huggingface.co/blog/paligemma) by Google. It is a 3B vision-language model composed of a [SigLIP](siglip) vision encoder and a [Gemma](gemma) language decoder linked by a multimodal linear projection. It cuts an image into a fixed number of ViT tokens and prepends them to an optional prompt. One particularity is that the model uses full block attention on all the image tokens plus the input text tokens.
|
289_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#overview
|
.md
|
prompt. One particularity is that the model uses full block attention on all the image tokens plus the input text tokens. It comes in 3 resolutions (224x224, 448x448 and 896x896), with 3 base models, 55 fine-tuned versions for different tasks, and 2 mix models.
|
289_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/paligemma/paligemma_arch.png"
alt="drawing" width="600"/>
<small> PaliGemma architecture. Taken from the <a href="https://huggingface.co/blog/paligemma">blog post.</a> </small>
This model was contributed by [Molbap](https://huggingface.co/Molbap).
|
289_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#usage-tips
|
.md
|
- PaliGemma is not meant for conversational use, and it works best when fine-tuned to a specific use case. Some downstream tasks on which PaliGemma can be fine-tuned include image captioning, visual question answering (VQA), object detection, referring expression segmentation and document understanding.
|
289_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#usage-tips
|
.md
|
- One can use `PaliGemmaProcessor` to prepare images, text and optional labels for the model. When fine-tuning a PaliGemma model, the `suffix` argument can be passed to the processor, which creates the `labels` for the model (see the fine-tuning sketch after this snippet):
```python
prompt = "What is on the flower?"
answer = "a bee"
inputs = processor(images=raw_image, text=prompt, suffix=answer, return_tensors="pt")
```
|
289_2_1
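To make the role of the generated `labels` concrete, here is a hedged sketch of a single fine-tuning forward pass. It assumes the `google/paligemma-3b-pt-224` pretrained checkpoint and that the model returns a `loss` when `labels` are present; it is a sketch, not an official training recipe:

```python
from PIL import Image
import requests
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-pt-224"  # assumed pretrained checkpoint for fine-tuning
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

raw_image = Image.open(requests.get(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg?download=true",
    stream=True,
).raw)

# `suffix` becomes the labels; prompt and image tokens are excluded from the loss
inputs = processor(images=raw_image, text="What is on the flower?", suffix="a bee", return_tensors="pt")
loss = model(**inputs).loss
loss.backward()  # an optimizer step would follow here in a real training loop
```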
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#usage-example
|
.md
|
The model can accept a single image or multiple images. According to the [paper](https://arxiv.org/abs/2407.07726v1), the PaliGemma checkpoint can transfer to tasks which take multiple images as input. NLVR2 is one such task, which asks one question about two images and requires looking at both to give the correct answer. Here is example code for single and multi-image inference.
|
289_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#single-image-inference
|
.md
|
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
model_id = "google/paligemma-3b-mix-224"
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
|
289_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#single-image-inference
|
.md
|
prompt = "What is on the flower?"
image_file = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg?download=true"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(raw_image, prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True)[inputs.input_ids.shape[1]: ])
```
|
289_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#multi-image-inference
|
.md
|
```python
model_id = "google/paligemma-3b-ft-nlvr2-448" # checkpoint tuned for multiple images
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = PaliGemmaProcessor.from_pretrained(model_id)
|
289_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#multi-image-inference
|
.md
|
prompt = "answer en Which of the two pictures shows a snowman, first or second?"
stop_sign_image = Image.open(
requests.get("https://www.ilankelman.org/stopsigns/australia.jpg", stream=True).raw
)
snow_image = Image.open(
requests.get(
"https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg", stream=True
).raw
)
inputs = processor(images=[[snow_image, stop_sign_image]], text=prompt, return_tensors="pt")
|
289_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#multi-image-inference
|
.md
|
inputs = processor(images=[[snow_image, stop_sign_image]], text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True)[inputs.input_ids.shape[1]: ])
```
|
289_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PaliGemma. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- A blog post introducing all the features of PaliGemma can be found [here](https://huggingface.co/blog/paligemma).
|
289_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#resources
|
.md
|
- A blog post introducing all the features of PaliGemma can be found [here](https://huggingface.co/blog/paligemma).
- Demo notebooks on how to fine-tune PaliGemma for VQA with the Trainer API along with inference can be found [here](https://github.com/huggingface/notebooks/tree/main/examples/paligemma).
|
289_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#resources
|
.md
|
- Demo notebooks on how to fine-tune PaliGemma on a custom dataset (receipt image -> JSON) along with inference can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/PaliGemma). 🌎
|
289_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaconfig
|
.md
|
This is the configuration class to store the configuration of a [`PaliGemmaForConditionalGeneration`]. It is used to instantiate a
PaliGemma model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the PaliGemma-2B.
e.g. [paligemma-hf/paligemma-2b](https://huggingface.co/paligemma-hf/paligemma-2b)
|
289_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaconfig
|
.md
|
e.g. [paligemma-hf/paligemma-2b](https://huggingface.co/paligemma-hf/paligemma-2b)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`PaliGemmaVisionConfig`, *optional*):
Custom vision config or dict
text_config (`Union[AutoConfig, dict]`, *optional*):
The config object of the text backbone. Can be any of `LlamaConfig` or `MistralConfig`.
|
289_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaconfig
|
.md
|
The config object of the text backbone. Can be any of `LlamaConfig` or `MistralConfig`.
ignore_index (`int`, *optional*, defaults to -100):
The ignore index for the loss function.
image_token_index (`int`, *optional*, defaults to 256000):
The image token index to encode the image prompt.
vocab_size (`int`, *optional*, defaults to 257152):
Vocabulary size of the PaliGemma model. Defines the number of different tokens that can be represented by the
|
289_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaconfig
|
.md
|
Vocabulary size of the PaliGemma model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`~PaliGemmaForConditionalGeneration`]
projection_dim (`int`, *optional*, defaults to 2048):
Dimension of the multimodal projection space.
hidden_size (`int`, *optional*, defaults to 2048):
Dimension of the hidden layer of the Language model.
Example:
```python
|
289_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 2048):
Dimension of the hidden layer of the Language model.
Example:
```python
>>> from transformers import PaliGemmaForConditionalGeneration, PaliGemmaConfig, SiglipVisionConfig, GemmaConfig
|
289_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaconfig
|
.md
|
>>> # Initializing a Siglip-like vision config
>>> vision_config = SiglipVisionConfig()
>>> # Initializing a PaliGemma config
>>> text_config = GemmaConfig()
>>> # Initializing a PaliGemma paligemma-3b-224 style configuration
>>> configuration = PaliGemmaConfig(vision_config, text_config)
>>> # Initializing a model from the paligemma-3b-224 style configuration
>>> model = PaliGemmaForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
289_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaprocessor
|
.md
|
Constructs a PaliGemma processor which wraps a PaliGemma image processor and a PaliGemma tokenizer into a single processor.
[`PaliGemmaProcessor`] offers all the functionalities of [`SiglipImageProcessor`] and [`GemmaTokenizerFast`]. See the
[`~PaliGemmaProcessor.__call__`] and [`~PaliGemmaProcessor.decode`] for more information.
Args:
image_processor ([`SiglipImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`GemmaTokenizerFast`], *optional*):
|
289_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaprocessor
|
.md
|
The image processor is a required input.
tokenizer ([`GemmaTokenizerFast`], *optional*):
The tokenizer is a required input.
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string.
|
289_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaforconditionalgeneration
|
.md
|
The PALIGEMMA model which consists of a vision backbone and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
289_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PaliGemmaConfig`] or [`PaliGemmaVisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
289_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
289_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
290_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
290_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#overview
|
.md
|
The [`EncoderDecoderModel`] can be used to initialize a sequence-to-sequence model with any
pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks
was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by
Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
|
290_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#overview
|
.md
|
Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
After such an [`EncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like
any other model (see the examples for more information).
An application of this architecture could be to leverage two pretrained [`BertModel`] as the encoder
and decoder for a summarization model as was shown in: [Text Summarization with Pretrained Encoders](https://arxiv.org/abs/1908.08345) by Yang Liu and Mirella Lapata.
|
290_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#randomly-initializing-encoderdecodermodel-from-model-configurations
|
.md
|
[`EncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`BertModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder.
```python
>>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
>>> config_encoder = BertConfig()
>>> config_decoder = BertConfig()
|
290_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#randomly-initializing-encoderdecodermodel-from-model-configurations
|
.md
|
>>> config_encoder = BertConfig()
>>> config_decoder = BertConfig()
>>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = EncoderDecoderModel(config=config)
```
|
290_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#initialising-encoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
[`EncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, *e.g.* BERT, can serve as the encoder, while pretrained auto-encoding models (*e.g.* BERT), pretrained causal language models (*e.g.* GPT2), as well as the pretrained decoder part of sequence-to-sequence models (*e.g.* the decoder of BART), can be used as the decoder.
|
290_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#initialising-encoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [`EncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder).
To do so, the `EncoderDecoderModel` class provides an [`EncoderDecoderModel.from_encoder_decoder_pretrained`] method.
|
290_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#initialising-encoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
To do so, the `EncoderDecoderModel` class provides an [`EncoderDecoderModel.from_encoder_decoder_pretrained`] method.
```python
>>> from transformers import EncoderDecoderModel, BertTokenizer
|
290_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#initialising-encoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased")
```
|
290_3_3
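The encoder and decoder do not have to share an architecture. As a hedged illustration of the claim above, a BERT encoder can be paired with a GPT-2 decoder (the checkpoint names are the usual Hub identifiers; the cross-attention weights added to the decoder are randomly initialized):

```python
from transformers import EncoderDecoderModel

# BERT as the auto-encoding encoder, GPT-2 as the causal-LM decoder
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google-bert/bert-base-uncased", "openai-community/gpt2"
)
```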
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-an-existing-encoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
To load fine-tuned checkpoints of the `EncoderDecoderModel` class, [`EncoderDecoderModel`] provides the `from_pretrained(...)` method, just like any other model architecture in Transformers.
To perform inference, one uses the [`generate`] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling; a short sketch of the non-greedy options follows the example below.
```python
>>> from transformers import AutoTokenizer, EncoderDecoderModel
|
290_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-an-existing-encoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
>>> # load a fine-tuned seq2seq model and corresponding tokenizer
>>> model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
>>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
|
290_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-an-existing-encoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
>>> # let's perform inference on a long piece of text
>>> ARTICLE_TO_SUMMARIZE = (
... "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
... )
>>> input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids
|
290_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-an-existing-encoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
>>> # autoregressively generate summary (uses greedy decoding by default)
>>> generated_ids = model.generate(input_ids)
>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow.
|
290_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-an-existing-encoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
```
|
290_4_4
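Since the text above notes that [`generate`] also supports beam search and sampling, here is a hedged sketch of switching decoding strategies. It continues from the `model`, `tokenizer` and `input_ids` defined in the example above, and the parameter values are purely illustrative:

```python
# beam search instead of greedy decoding
beam_ids = model.generate(input_ids, num_beams=4, max_new_tokens=64, early_stopping=True)

# multinomial sampling
sample_ids = model.generate(input_ids, do_sample=True, top_k=50, temperature=0.7, max_new_tokens=64)

print(tokenizer.batch_decode(beam_ids, skip_special_tokens=True)[0])
print(tokenizer.batch_decode(sample_ids, skip_special_tokens=True)[0])
```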
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-a-pytorch-checkpoint-into-tfencoderdecodermodel
|
.md
|
[`TFEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a
pytorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only pytorch
checkpoints for a particular encoder-decoder model, a workaround is:
```python
>>> # a workaround to load from pytorch checkpoint
>>> from transformers import EncoderDecoderModel, TFEncoderDecoderModel
>>> _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
|
290_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-a-pytorch-checkpoint-into-tfencoderdecodermodel
|
.md
|
>>> _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
>>> _model.encoder.save_pretrained("./encoder")
>>> _model.decoder.save_pretrained("./decoder")
>>> model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
... )
>>> # This is only for copying some specific attributes of this particular model.
>>> model.config = _model.config
```
|
290_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model.
As you can see, only 2 inputs are required for the model in order to compute a loss: `input_ids` (which are the
`input_ids` of the encoded input sequence) and `labels` (which are the `input_ids` of the encoded
target sequence).
```python
>>> from transformers import BertTokenizer, EncoderDecoderModel
|
290_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased")
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id
|
290_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
>>> input_ids = tokenizer(
|
290_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
... "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a
|
290_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.",
|
290_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
... return_tensors="pt",
... ).input_ids
|
290_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
>>> labels = tokenizer(
... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.",
... return_tensors="pt",
... ).input_ids
|
290_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
|
.md
|
>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_ids=input_ids, labels=labels).loss
```
Detailed [colab](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing#scrollTo=ZwQIEhKOrJpl) for training.
This model was contributed by [thomwolf](https://github.com/thomwolf). This model's TensorFlow and Flax versions
were contributed by [ydshieh](https://github.com/ydshieh).
|
290_6_7
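To round out the training snippet, a minimal sketch of one manual optimization step; it assumes the `model`, `input_ids` and `labels` defined above, and a real setup would typically use the `Trainer`/`Seq2SeqTrainer` instead:

```python
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

loss = model(input_ids=input_ids, labels=labels).loss  # forward pass builds decoder_input_ids from labels
loss.backward()
optimizer.step()
optimizer.zero_grad()
```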
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecoderconfig
|
.md
|
[`EncoderDecoderConfig`] is the configuration class to store the configuration of an [`EncoderDecoderModel`]. It is
used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder
configs.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
kwargs (*optional*):
Dictionary of keyword arguments. Notably:
|
290_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecoderconfig
|
.md
|
Args:
kwargs (*optional*):
Dictionary of keyword arguments. Notably:
- **encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines
the encoder config.
- **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines
the decoder config.
Examples:
```python
>>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
|
290_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecoderconfig
|
.md
|
>>> # Initializing a BERT google-bert/bert-base-uncased style configuration
>>> config_encoder = BertConfig()
>>> config_decoder = BertConfig()
>>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> # Initializing a Bert2Bert model (with random weights) from the google-bert/bert-base-uncased style configurations
>>> model = EncoderDecoderModel(config=config)
|
290_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecoderconfig
|
.md
|
>>> # Accessing the model configuration
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>> # set decoder config to causal lm
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True
>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")
|
290_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecoderconfig
|
.md
|
>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")
>>> # loading model and config from pretrained folder
>>> encoder_decoder_config = EncoderDecoderConfig.from_pretrained("my-model")
>>> model = EncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
```
<frameworkcontent>
<pt>
|
290_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecodermodel
|
.md
|
This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the
encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via the
[`~AutoModel.from_pretrained`] function and the decoder is loaded via the [`~AutoModelForCausalLM.from_pretrained`]
function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream
generative task, like summarization.
|
290_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecodermodel
|
.md
|
generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model
|
290_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecodermodel
|
.md
|
After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model
(see the examples for more information).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
290_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecodermodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`EncoderDecoderConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
290_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecodermodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
[`EncoderDecoderModel`] is a generic model class that will be instantiated as a transformer architecture with one
of the base model classes of the library as encoder and another one as decoder when created with the
[`~AutoModel.from_pretrained`] class method for the encoder and
|
290_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecodermodel
|
.md
|
[`~AutoModel.from_pretrained`] class method for the encoder and
[`~AutoModelForCausalLM.from_pretrained`] class method for the decoder.
Methods: forward
- from_encoder_decoder_pretrained
</pt>
<tf>
|
290_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#tfencoderdecodermodel
|
.md
|
No docstring available for TFEncoderDecoderModel
Methods: call
- from_encoder_decoder_pretrained
</tf>
<jax>
|
290_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#flaxencoderdecodermodel
|
.md
|
No docstring available for FlaxEncoderDecoderModel
Methods: __call__
- from_encoder_decoder_pretrained
</jax>
</frameworkcontent>
|
290_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
291_0_0
|