source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
291_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#overview
|
.md
|
The *ColPali* model was proposed in [ColPali: Efficient Document Retrieval with Vision Language Models](https://doi.org/10.48550/arXiv.2407.01449) by **Manuel Faysse***, **Hugues Sibille***, **Tony Wu***, Bilel Omrani, Gautier Viaud, Céline Hudelot, Pierre Colombo (* denotes equal contribution). Work led by ILLUIN Technology.
|
291_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#overview
|
.md
|
In our proposed *ColPali* approach, we leverage VLMs to construct efficient multi-vector embeddings directly from document images (“screenshots”) for document retrieval. We train the model to maximize the similarity between these document embeddings and the corresponding query embeddings, using the late interaction method introduced in ColBERT.
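To make the late-interaction step concrete, here is a minimal sketch of ColBERT-style MaxSim scoring between one query and one document (this is an illustrative re-implementation, not the library's `score_retrieval` method), assuming L2-normalized multi-vector embeddings:
```python
import torch

def late_interaction_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    # query_emb: (num_query_tokens, dim), doc_emb: (num_doc_tokens, dim), both L2-normalized
    sim = query_emb @ doc_emb.T            # cosine similarity of every query token vs every doc token
    return sim.max(dim=1).values.sum()     # MaxSim: best doc token per query token, summed over query tokens
```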
|
291_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#overview
|
.md
|
Using *ColPali* removes the need for potentially complex and brittle layout recognition and OCR pipelines with a single model that can take into account both the textual and visual content (layout, charts, etc.) of a document.
|
291_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#resources
|
.md
|
- The *ColPali* arXiv paper can be found [here](https://doi.org/10.48550/arXiv.2407.01449). 📄
- The official blog post detailing ColPali can be found [here](https://huggingface.co/blog/manu/colpali). 📝
- The original model implementation code for the ColPali model and for the `colpali-engine` package can be found [here](https://github.com/illuin-tech/colpali). 🌎
|
291_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#resources
|
.md
|
- Cookbooks for learning to use the transformers-native version of *ColPali*, fine-tuning, and similarity maps generation can be found [here](https://github.com/tonywu71/colpali-cookbooks). 📚
This model was contributed by [@tonywu71](https://huggingface.co/tonywu71) and [@yonigozlan](https://huggingface.co/yonigozlan).
|
291_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#usage
|
.md
|
This example demonstrates how to use *ColPali* to embed both queries and images, calculate their similarity scores, and identify the most relevant matches. For a specific query, you can retrieve the top-k most similar images by selecting the ones with the highest similarity scores.
```python
import torch
from PIL import Image
from transformers import ColPaliForRetrieval, ColPaliProcessor
model_name = "vidore/colpali-v1.2-hf"
|
291_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#usage
|
.md
|
from transformers import ColPaliForRetrieval, ColPaliProcessor
model_name = "vidore/colpali-v1.2-hf"
model = ColPaliForRetrieval.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
).eval()
processor = ColPaliProcessor.from_pretrained(model_name)
|
291_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#usage
|
.md
|
processor = ColPaliProcessor.from_pretrained(model_name)
# Your inputs (replace dummy images with screenshots of your documents)
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "What is the organizational structure for our R&D department?",
    "Can you provide a breakdown of last year’s financial performance?",
]
|
291_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#usage
|
.md
|
# Process the inputs
batch_images = processor(images=images).to(model.device)
batch_queries = processor(text=queries).to(model.device)
# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images).embeddings
    query_embeddings = model(**batch_queries).embeddings

# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)
```
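As a follow-up (a sketch, assuming `scores` is a `(num_queries, num_images)` tensor as returned by `score_retrieval`), the top-k most relevant images per query can be read off directly:
```python
# Retrieve the indices of the top-k most relevant images for each query
k = 1
top_k = scores.topk(k, dim=1)
for query, indices in zip(queries, top_k.indices.tolist()):
    print(f"{query!r} -> most relevant image index(es): {indices}")
```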
|
291_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliconfig
|
.md
|
Configuration class to store the configuration of a [`ColPaliForRetrieval`]. It is used to instantiate a
`ColPaliForRetrieval` model according to the specified arguments, defining the model architecture following the methodology
from the "ColPali: Efficient Document Retrieval with Vision Language Models" paper.
Creating a configuration with the default settings will result in a configuration where the VLM backbone is set to the
|
291_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliconfig
|
.md
|
Creating a configuration with the default settings will result in a configuration where the VLM backbone is set to the
default PaliGemma configuration, i.e. the one from [vidore/colpali-v1.2](https://huggingface.co/vidore/colpali-v1.2).
The ColPali config is very similar to [`PaligemmaConfig`], but with an extra attribute defining the embedding dimension.
Note that, contrary to what the class name suggests (the name actually refers to the ColPali **methodology**), you can
|
291_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliconfig
|
.md
|
Note that, contrary to what the class name suggests (the name actually refers to the ColPali **methodology**), you can
use a different VLM backbone model than PaliGemma by passing the corresponding VLM configuration to the class constructor.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vlm_config (`PretrainedConfig`, *optional*):
Configuration of the VLM backbone model.
|
291_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliconfig
|
.md
|
Args:
vlm_config (`PretrainedConfig`, *optional*):
Configuration of the VLM backbone model.
text_config (`PretrainedConfig`, *optional*):
Configuration of the text backbone model. Overrides the `text_config` attribute of the `vlm_config` if provided.
embedding_dim (`int`, *optional*, defaults to 128):
Dimension of the multi-vector embeddings produced by the model.
Example:
```python
from transformers.models.colpali import ColPaliConfig, ColPaliForRetrieval
|
291_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliconfig
|
.md
|
config = ColPaliConfig()
model = ColPaliForRetrieval(config)
```
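Since the class accepts any VLM configuration (as noted above), swapping the backbone can be sketched as follows; here we simply pass an explicit default PaliGemma config, and the resulting model is randomly initialized:
```python
from transformers import PaliGemmaConfig
from transformers.models.colpali import ColPaliConfig, ColPaliForRetrieval

# A sketch: pass the VLM backbone config explicitly and set the multi-vector embedding dimension
vlm_config = PaliGemmaConfig()
config = ColPaliConfig(vlm_config=vlm_config, embedding_dim=128)
model = ColPaliForRetrieval(config)
```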
|
291_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliprocessor
|
.md
|
Constructs a ColPali processor which wraps a PaliGemmaProcessor and adds special methods to process images and queries, as
well as to compute the late-interaction retrieval score.
[`ColPaliProcessor`] offers all the functionalities of [`PaliGemmaProcessor`]. See the [`~PaliGemmaProcessor.__call__`]
for more information.
Args:
image_processor ([`SiglipImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
The tokenizer is a required input.
|
291_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliprocessor
|
.md
|
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
The tokenizer is a required input.
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string.
|
291_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliforretrieval
|
.md
|
In our proposed ColPali approach, we leverage VLMs to construct efficient multi-vector embeddings directly
from document images (“screenshots”) for document retrieval. We train the model to maximize the similarity
between these document embeddings and the corresponding query embeddings, using the late interaction method
introduced in ColBERT.
Using ColPali removes the need for potentially complex and brittle layout recognition and OCR pipelines with a
|
291_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
|
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliforretrieval
|
.md
|
Using ColPali removes the need for potentially complex and brittle layout recognition and OCR pipelines with a
single model that can take into account both the textual and visual content (layout, charts, etc.) of a document.
Methods: forward
|
291_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
292_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
292_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#overview
|
.md
|
The SigLIP model was proposed in [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer. SigLIP proposes to replace the loss function used in [CLIP](clip) by a simple pairwise sigmoid loss. This results in better performance in terms of zero-shot classification accuracy on ImageNet.
The abstract from the paper is the following:
|
292_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#overview
|
.md
|
*We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that
|
292_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#overview
|
.md
|
better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that achieves 84.5% ImageNet zero-shot accuracy in two days. The disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and negative to positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being
|
292_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#overview
|
.md
|
one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being sufficient.*
|
292_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#usage-tips
|
.md
|
- Usage of SigLIP is similar to [CLIP](clip). The main difference is the training loss, which does not require a global view of all the pairwise similarities of images and texts within a batch. One needs to apply the sigmoid activation function to the logits, rather than the softmax (a minimal sketch of this loss is shown below).
- Training is supported but does not use `torch.distributed` utilities, which may limit the scalability of batch size. However, DDP and FSDP work on single-node multi-GPU setups.
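The following is a minimal sketch of the pairwise sigmoid loss from the paper (here normalized as a simple mean over all pairs; the paper divides the sum by the batch size):
```python
import torch
import torch.nn.functional as F

def siglip_pairwise_sigmoid_loss(logits: torch.Tensor) -> torch.Tensor:
    # logits: (batch, batch) image-text pair similarities, already scaled and shifted by the model
    n = logits.size(0)
    labels = 2 * torch.eye(n, device=logits.device) - 1  # +1 for matched pairs (diagonal), -1 otherwise
    return -F.logsigmoid(labels * logits).mean()
```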
|
292_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#usage-tips
|
.md
|
- When using the standalone [`SiglipTokenizer`] or [`SiglipProcessor`], make sure to pass `padding="max_length"` as that's how the model was trained.
- To get the same results as the pipeline, a prompt template of "This is a photo of {label}." should be used.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip_table.jpeg"
alt="drawing" width="600"/>
|
292_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#usage-tips
|
.md
|
alt="drawing" width="600"/>
<small> SigLIP evaluation results compared to CLIP. Taken from the <a href="https://arxiv.org/abs/2303.15343">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/google-research/big_vision/tree/main).
|
292_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#usage-example
|
.md
|
There are 2 main ways to use SigLIP: either using the pipeline API, which abstracts away all the complexity for you, or by using the `SiglipModel` class yourself.
|
292_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#pipeline-api
|
.md
|
The pipeline allows you to use the model in a few lines of code:
```python
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests
>>> # load pipe
>>> image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-224")
>>> # load image
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)
|
292_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#pipeline-api
|
.md
|
>>> # inference
>>> candidate_labels = ["2 cats", "a plane", "a remote"]
>>> outputs = image_classifier(image, candidate_labels=candidate_labels)
>>> outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs]
>>> print(outputs)
[{'score': 0.1979, 'label': '2 cats'}, {'score': 0.0, 'label': 'a remote'}, {'score': 0.0, 'label': 'a plane'}]
```
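To pick the single best label from the pipeline output, you can simply take the highest-scoring entry:
```python
>>> best = max(outputs, key=lambda item: item["score"])
>>> print(best["label"])
2 cats
```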
|
292_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#using-the-model-yourself
|
.md
|
If you want to do the pre- and postprocessing yourself, here's how to do that:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AutoModel
>>> import torch
>>> model = AutoModel.from_pretrained("google/siglip-base-patch16-224")
>>> processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
|
292_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#using-the-model-yourself
|
.md
|
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> candidate_labels = ["2 cats", "2 dogs"]
# follows the pipeline prompt template to get same results
>>> texts = [f'This is a photo of {label}.' for label in candidate_labels]
>>> # important: we pass `padding=max_length` since the model was trained with this
>>> inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
|
292_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#using-the-model-yourself
|
.md
|
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image
>>> probs = torch.sigmoid(logits_per_image) # these are the probabilities
>>> print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
31.9% that image 0 is '2 cats'
```
|
292_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SigLIP.
- [Zero-shot image classification task guide](../tasks/zero_shot_image_classification)
- Demo notebooks for SigLIP can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SigLIP). 🌎
|
292_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#resources
|
.md
|
- Demo notebooks for SigLIP can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SigLIP). 🌎
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
292_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#combining-siglip-and-flash-attention-2
|
.md
|
First, make sure to install the latest version of Flash Attention 2.
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
>>> import torch
>>> import requests
|
292_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#combining-siglip-and-flash-attention-2
|
.md
|
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import SiglipProcessor, SiglipModel
>>> device = "cuda" # the device to load the model onto
|
292_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#combining-siglip-and-flash-attention-2
|
.md
|
>>> model = SiglipModel.from_pretrained(
... "google/siglip-so400m-patch14-384",
... attn_implementation="flash_attention_2",
... torch_dtype=torch.float16,
... device_map=device,
... )
>>> processor = SiglipProcessor.from_pretrained("google/siglip-so400m-patch14-384")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
|
292_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#combining-siglip-and-flash-attention-2
|
.md
|
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> candidate_labels = ["2 cats", "2 dogs"]
# follows the pipeline prompt template to get same results
>>> texts = [f'This is a photo of {label}.' for label in candidate_labels]
# important: we pass `padding=max_length` since the model was trained with this
>>> inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
>>> inputs.to(device)
|
292_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#combining-siglip-and-flash-attention-2
|
.md
|
>>> with torch.no_grad():
... with torch.autocast(device):
... outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image
>>> probs = torch.sigmoid(logits_per_image) # these are the probabilities
>>> print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
51.3% that image 0 is '2 cats'
```
|
292_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
292_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
You may set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. Make sure you have `torch>=2.1.1`.
```python
>>> import torch
>>> from transformers import SiglipModel

>>> device = "cuda"  # the device to load the model onto
|
292_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#using-scaled-dot-product-attention-sdpa
|
.md
|
>>> model = SiglipModel.from_pretrained(
... "google/siglip-so400m-patch14-384",
... attn_implementation="sdpa",
... torch_dtype=torch.float16,
... device_map=device,
... )
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
|
292_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#expected-speedups
|
.md
|
Below is an expected speedup diagram that compares inference time between the native implementation in Transformers, using the `google/siglip-so400m-patch14-384` checkpoint in `float16` precision, and the Flash Attention 2 / SDPA version of the model at different batch sizes.
<div style="text-align: center">
<img src="https://i.imgur.com/cWm4rsn.png">
</div>
|
292_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipconfig
|
.md
|
[`SiglipConfig`] is the configuration class to store the configuration of a [`SiglipModel`]. It is used to
instantiate a Siglip model according to the specified arguments, defining the text model and vision model configs.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Siglip
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
|
292_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipconfig
|
.md
|
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`SiglipTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`SiglipVisionConfig`].
|
292_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipconfig
|
.md
|
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`SiglipVisionConfig`].
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import SiglipConfig, SiglipModel
|
292_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipconfig
|
.md
|
>>> # Initializing a SiglipConfig with google/siglip-base-patch16-224 style configuration
>>> configuration = SiglipConfig()
>>> # Initializing a SiglipModel (with random weights) from the google/siglip-base-patch16-224 style configuration
>>> model = SiglipModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
|
292_10_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipconfig
|
.md
|
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a SiglipConfig from a SiglipTextConfig and a SiglipVisionConfig
>>> from transformers import SiglipTextConfig, SiglipVisionConfig
>>> # Initializing a SiglipText and SiglipVision configuration
>>> config_text = SiglipTextConfig()
>>> config_vision = SiglipVisionConfig()
>>> config = SiglipConfig.from_text_vision_configs(config_text, config_vision)
```
Methods: from_text_vision_configs
|
292_10_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
This is the configuration class to store the configuration of a [`SiglipTextModel`]. It is used to instantiate a
Siglip text encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the text encoder of the Siglip
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
|
292_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the Siglip text model. Defines the number of different tokens that can be represented by
the `input_ids` passed when calling [`SiglipModel`].
|
292_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
the `input_ids` passed when calling [`SiglipModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
292_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (`int`, *optional*, defaults to 64):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
|
292_11_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
292_11_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
pad_token_id (`int`, *optional*, defaults to 1):
The id of the padding token in the vocabulary.
bos_token_id (`int`, *optional*, defaults to 49406):
The id of the beginning-of-sequence token in the vocabulary.
eos_token_id (`int`, *optional*, defaults to 49407):
The id of the end-of-sequence token in the vocabulary.
Example:
```python
|
292_11_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
The id of the end-of-sequence token in the vocabulary.
Example:
```python
>>> from transformers import SiglipTextConfig, SiglipTextModel
|
292_11_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextconfig
|
.md
|
>>> # Initializing a SiglipTextConfig with google/siglip-base-patch16-224 style configuration
>>> configuration = SiglipTextConfig()
>>> # Initializing a SiglipTextModel (with random weights) from the google/siglip-base-patch16-224 style configuration
>>> model = SiglipTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
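To illustrate the arguments listed above, a configuration with non-default values (arbitrary, for illustration only) can be built the same way:
```python
>>> custom_configuration = SiglipTextConfig(
...     vocab_size=32000,
...     hidden_size=512,
...     num_hidden_layers=6,
...     num_attention_heads=8,
...     max_position_embeddings=64,
... )
>>> small_model = SiglipTextModel(custom_configuration)
```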
|
292_11_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionconfig
|
.md
|
This is the configuration class to store the configuration of a [`SiglipVisionModel`]. It is used to instantiate a
Siglip vision encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the vision encoder of the Siglip
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
|
292_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionconfig
|
.md
|
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
|
292_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
|
292_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
Number of channels in the input images.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
|
292_12_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionconfig
|
.md
|
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
292_12_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
Example:
```python
>>> from transformers import SiglipVisionConfig, SiglipVisionModel
|
292_12_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionconfig
|
.md
|
>>> # Initializing a SiglipVisionConfig with google/siglip-base-patch16-224 style configuration
>>> configuration = SiglipVisionConfig()
>>> # Initializing a SiglipVisionModel (with random weights) from the google/siglip-base-patch16-224 style configuration
>>> model = SiglipVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
292_12_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptokenizer
|
.md
|
Construct a Siglip tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
|
292_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptokenizer
|
.md
|
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"</s>"`):
The token used for padding, for example when batching sequences of different lengths.
additional_special_tokens (`List[str]`, *optional*):
|
292_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptokenizer
|
.md
|
additional_special_tokens (`List[str]`, *optional*):
Additional special tokens used by the tokenizer.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
|
292_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptokenizer
|
.md
|
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
|
292_13_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptokenizer
|
.md
|
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
model_max_length (`int`, *optional*, defaults to 64):
The maximum length (in number of tokens) for model inputs.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
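As an illustration of the `sp_model_kwargs` argument described above (a sketch; the sampling parameters are arbitrary), subword regularization can be enabled when loading the tokenizer:
```python
>>> from transformers import SiglipTokenizer

>>> tokenizer = SiglipTokenizer.from_pretrained(
...     "google/siglip-base-patch16-224",
...     sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
... )
>>> tokenizer("a photo of 2 cats").input_ids  # with a unigram model, ids may vary between calls
```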
|
292_13_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipimageprocessor
|
.md
|
Constructs a SigLIP image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`):
Size of the image after resizing. Can be overridden by `size` in the `preprocess` method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
|
292_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipimageprocessor
|
.md
|
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
|
292_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipimageprocessor
|
.md
|
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image by the specified mean and standard deviation. Can be overridden by
`do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.5, 0.5, 0.5]`):
|
292_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipimageprocessor
|
.md
|
`do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.5, 0.5, 0.5]`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `[0.5, 0.5, 0.5]`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
|
292_14_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipimageprocessor
|
.md
|
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
Methods: preprocess
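A short sketch of the default preprocessing pipeline described above (resize to 224x224, rescale by 1/255, normalize with mean/std 0.5):
```python
>>> from PIL import Image
>>> from transformers import SiglipImageProcessor

>>> image_processor = SiglipImageProcessor()  # all defaults listed above
>>> dummy_image = Image.new("RGB", (640, 480), color="white")
>>> pixel_values = image_processor(images=dummy_image, return_tensors="pt").pixel_values
>>> pixel_values.shape
torch.Size([1, 3, 224, 224])
```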
|
292_14_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipprocessor
|
.md
|
Constructs a Siglip processor which wraps a Siglip image processor and a Siglip tokenizer into a single processor.
[`SiglipProcessor`] offers all the functionalities of [`SiglipImageProcessor`] and [`SiglipTokenizer`]. See the
[`~SiglipProcessor.__call__`] and [`~SiglipProcessor.decode`] for more information.
Args:
image_processor ([`SiglipImageProcessor`]):
The image processor is a required input.
tokenizer ([`SiglipTokenizer`]):
The tokenizer is a required input.
|
292_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipmodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
292_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipmodel
|
.md
|
and behavior.
Parameters:
config ([`SiglipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_image_features
|
292_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextmodel
|
.md
|
The text model from SigLIP without any head or projection on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
292_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SiglipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
292_17_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#sigliptextmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
292_17_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionmodel
|
.md
|
The vision model from SigLIP without any head or projection on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
292_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SiglipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
292_18_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipvisionmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
292_18_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipforimageclassification
|
.md
|
SigLIP vision encoder with an image classification head on top (a linear layer on top of the pooled final hidden states of
the patch tokens) e.g. for ImageNet.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
292_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipforimageclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SiglipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
292_19_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
|
https://huggingface.co/docs/transformers/en/model_doc/siglip/#siglipforimageclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
292_19_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
293_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
293_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#overview
|
.md
|
The PLBART model was proposed in [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
This is a BART-like model which can be used to perform code summarization, code generation, and code translation tasks. The pre-trained model `plbart-base` has been trained using a multilingual denoising task
on Java, Python and English.
According to the abstract:
|
293_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#overview
|
.md
|
on Java, Python and English.
According to the abstract:
*Code summarization and generation empower conversion between programming language (PL) and natural language (NL),
while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART,
a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks.
|
293_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#overview
|
.md
|
a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks.
PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding.
Experiments on code summarization in the English language, code generation, and code translation in seven programming languages
show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program
|
293_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#overview
|
.md
|
show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program
repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding.
Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow
(e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels
|
293_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#overview
|
.md
|
(e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels
even with limited annotations.*
This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The Authors' code can be found [here](https://github.com/wasiahmad/PLBART).
|
293_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#usage-examples
|
.md
|
PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, and code-to-code tasks. As the
model is multilingual, it expects the sequences in a different format. A special language id token is added in both the
source and target text. The source text format is `X [eos, src_lang_code]` where `X` is the source text. The
target text format is `[tgt_lang_code] X [eos]`. `bos` is never used.
|
293_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#usage-examples
|
.md
|
target text format is `[tgt_lang_code] X [eos]`. `bos` is never used.
However, for fine-tuning, the language token is sometimes omitted when only a single language is used. Please refer to [the paper](https://arxiv.org/abs/2103.06333) to learn more about this.
In cases where the language code is needed, the regular [`~PLBartTokenizer.__call__`] will encode source text format
when you pass texts as the first argument or with the keyword argument `text`, and will encode target text format if
|
293_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#usage-examples
|
.md
|
when you pass texts as the first argument or with the keyword argument `text`, and will encode target text format if
it's passed with the `text_target` keyword argument.
|
293_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#supervised-training
|
.md
|
```python
>>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer
>>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="en_XX", tgt_lang="python")
>>> example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
>>> expected_translation_english = "Returns the maximum value of a b c."
>>> inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt")
>>> model(**inputs)
```
|
293_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#generation
|
.md
|
While generating the target text set the `decoder_start_token_id` to the target language id. The following
example shows how to translate Python to English using the `uclanlp/plbart-python-en_XX` model.
```python
>>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer
|
293_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/plbart.md
|
https://huggingface.co/docs/transformers/en/model_doc/plbart/#generation
|
.md
|
>>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
>>> example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
>>> inputs = tokenizer(example_python_phrase, return_tensors="pt")
>>> model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX")
>>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
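>>> # A sketch (assumed continuation, not part of this chunk): decode the generated ids back into English text
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]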
|
293_4_1
|