source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
311_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
311_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#overview
|
.md
|
The VipLlava model was proposed in [Making Large Multimodal Models Understand Arbitrary Visual Prompts](https://arxiv.org/abs/2312.00784) by Mu Cai, Haotian Liu, Siva Karthik Mustikovela, Gregory P. Meyer, Yuning Chai, Dennis Park, Yong Jae Lee.
VipLlava enhances the training protocol of Llava by marking images and interacting with the model using natural cues like a "red bounding box" or "pointed arrow" during training.
The abstract from the paper is the following:
|
311_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#overview
|
.md
|
*While existing large vision-language multimodal models focus on whole image understanding, there is a prominent gap in achieving region-specific comprehension. Current approaches that use textual coordinates or spatial encodings often fail to provide a user-friendly interface for visual prompting. To address this challenge, we introduce a novel multimodal model capable of decoding arbitrary visual prompts. This allows users to intuitively mark images and interact with the model using natural cues like a
|
311_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#overview
|
.md
|
arbitrary visual prompts. This allows users to intuitively mark images and interact with the model using natural cues like a "red bounding box" or "pointed arrow". Our simple design directly overlays visual markers onto the RGB image, eliminating the need for complex region encodings, yet achieves state-of-the-art performance on region-understanding tasks like Visual7W, PointQA, and Visual Commonsense Reasoning benchmark. Furthermore, we present ViP-Bench, a comprehensive benchmark to assess the capability
|
311_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#overview
|
.md
|
Visual Commonsense Reasoning benchmark. Furthermore, we present ViP-Bench, a comprehensive benchmark to assess the capability of models in understanding visual prompts across multiple dimensions, enabling future research in this domain. Code, data, and model are publicly available.*
|
311_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#overview
|
.md
|
The original code can be found [here](https://github.com/mu-cai/ViP-LLaVA).
This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada).
|
311_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
- The architecture is similar to the LLaVA architecture, except that the multi-modal projector takes a set of concatenated vision hidden states and has an additional layernorm layer on that module.
- We advise users to set `padding_side="left"` when running batched generation, as it leads to more accurate results. Simply make sure to call `processor.tokenizer.padding_side = "left"` before generating (a minimal sketch follows below).
|
311_2_0
|
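A minimal sketch of the left-padding setup (assuming the llava-hf/vip-llava-7b-hf checkpoint):
```python
from transformers import AutoProcessor

# Switch the tokenizer to left padding before running batched generation
processor = AutoProcessor.from_pretrained("llava-hf/vip-llava-7b-hf")
processor.tokenizer.padding_side = "left"
```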
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
- Note that the model has not been explicitly trained to process multiple images in the same prompt; although this is technically possible, you may experience inaccurate results.
> [!NOTE]
|
311_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
> [!NOTE]
> LLaVA models after release v4.46 will raise warnings about adding `processor.patch_size = {{patch_size}}`, `processor.num_additional_image_tokens = {{num_additional_image_tokens}}` and `processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. It is strongly recommended to add the attributes to the processor if you own the model checkpoint, or open a PR if it is not owned by you.
|
311_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
Adding these attributes means that LLaVA will try to infer the number of image tokens required per image and expand the text with as many `<image>` placeholders as there will be tokens. Usually it is around 500 tokens per image, so make sure that the text is not truncated, as otherwise the embedding merge will fail.
|
311_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
The attributes can be obtained from the model config, e.g. `model.config.vision_config.patch_size` or `model.config.vision_feature_select_strategy`. The `num_additional_image_tokens` should be `1` if the vision backbone adds a CLS token, or `0` if nothing extra is added to the vision patches (a minimal sketch follows below).
|
311_2_4
|
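For reference, a minimal sketch of copying these attributes over from the model config, assuming the llava-hf/vip-llava-7b-hf checkpoint (whose CLIP-style vision backbone adds a CLS token) and falling back to `"default"` if the config does not expose a select strategy:
```python
from transformers import AutoProcessor, VipLlavaForConditionalGeneration

model_id = "llava-hf/vip-llava-7b-hf"
model = VipLlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Copy the values the processor needs from the model config
processor.patch_size = model.config.vision_config.patch_size
processor.vision_feature_select_strategy = getattr(model.config, "vision_feature_select_strategy", "default")
processor.num_additional_image_tokens = 1  # the CLIP backbone adds a CLS token
```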
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
- For better results, we recommend using the processor's `apply_chat_template()` method to format your prompt correctly. For that you need to construct a conversation history; passing in a plain string will not format your prompt. Each message in the conversation history for chat templates is a dictionary with keys "role" and "content". The "content" should be a list of dictionaries for the "text" and "image" modalities, as follows:
```python
from transformers import AutoProcessor
|
311_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
processor = AutoProcessor.from_pretrained("llava-hf/vip-llava-7b-hf")
conversation = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What’s shown in this image?"},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": "This image shows a red stop sign."},]
},
{
"role": "user",
"content": [
{"type": "text", "text": "Describe the image in more details."},
],
},
]
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
|
311_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
# Note that the template simply formats your prompt, you still have to tokenize it and obtain pixel values for your images
print(text_prompt)
>>> "###Human: <image>\nWhat’s shown in this image?###Assistant: This image shows a red stop sign.###Human: Describe the image in more details.###Assistant:"
```
- If you want to construct a chat prompt yourself, below is a list of prompt formats accepted by VipLLaVa checkpoints:
```bash
|
311_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
- If you want to construct a chat prompt yourself, below is a list of prompt formats accepted by VipLLaVa checkpoints:
```bash
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n<prompt>###Assistant:
```
For a multi-turn conversation:
```bash
|
311_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#usage-tips
|
.md
|
```
For a multi-turn conversation:
```bash
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n<prompt1>###Assistant: <answer1>###Human: <prompt2>###Assistant:
```
|
311_2_9
|
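As an illustration, here is a minimal sketch that feeds one of these hand-built prompts to the model; the checkpoint name and image URL are only examples:
```python
import requests
from PIL import Image
from transformers import AutoProcessor, VipLlavaForConditionalGeneration

model_id = "llava-hf/vip-llava-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = VipLlavaForConditionalGeneration.from_pretrained(model_id)

system = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions."
)
prompt = f"{system}###Human: <image>\nWhat is shown in this image?###Assistant:"

url = "https://www.ilankelman.org/stopsigns/australia.jpg"  # any image URL works here
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output[0], skip_special_tokens=True))
```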
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaconfig
|
.md
|
This is the configuration class to store the configuration of a [`VipLlavaForConditionalGeneration`]. It is used to instantiate a
VipLlava model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the VipLlava-9B.
e.g. [ybelkada/vip-llava-7b-hf](https://huggingface.co/ybelkada/vip-llava-7b-hf)
|
311_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaconfig
|
.md
|
e.g. [ybelkada/vip-llava-7b-hf](https://huggingface.co/ybelkada/vip-llava-7b-hf)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`VipLlavaVisionConfig`, *optional*):
Custom vision config or dict
text_config (`Union[AutoConfig, dict]`, *optional*):
The config object of the text backbone. Can be any of `LlamaConfig` or `MistralConfig`.
|
311_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaconfig
|
.md
|
The config object of the text backbone. Can be any of `LlamaConfig` or `MistralConfig`.
ignore_index (`int`, *optional*, defaults to -100):
The ignore index for the loss function.
image_token_index (`int`, *optional*, defaults to 32000):
The image token index to encode the image prompt.
projector_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The activation function used by the multimodal projector.
projector_layernorm_eps (`float`, *optional*, defaults to 1e-05):
|
311_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaconfig
|
.md
|
The activation function used by the multimodal projector.
projector_layernorm_eps (`float`, *optional*, defaults to 1e-05):
The layer norm epsilon of the projector layernorm.
vision_feature_layers (`List[int]`, *optional*, defaults to `[-2, -5, -8, -11, 6]`):
The list of layers to select the vision features from.
image_seq_length (`int`, *optional*, defaults to 576):
Sequence length of one image embedding.
Example:
```python
|
311_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaconfig
|
.md
|
image_seq_length (`int`, *optional*, defaults to 576):
Sequence length of one image embedding.
Example:
```python
>>> from transformers import VipLlavaForConditionalGeneration, VipLlavaConfig, CLIPVisionConfig, LlamaConfig
|
311_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaconfig
|
.md
|
>>> # Initializing a CLIP-vision config
>>> vision_config = CLIPVisionConfig()
>>> # Initializing a Llama config
>>> text_config = LlamaConfig()
>>> # Initializing a VipLlava vipllava-7b style configuration
>>> configuration = VipLlavaConfig(vision_config, text_config)
>>> # Initializing a model from the vipllava-7b style configuration
>>> model = VipLlavaForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
311_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaforconditionalgeneration
|
.md
|
The VIPLLAVA model which consists of a vision backbone and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
311_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VipLlavaConfig`] or [`VipLlavaVisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
311_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vipllava.md
|
https://huggingface.co/docs/transformers/en/model_doc/vipllava/#vipllavaforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
311_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
312_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
312_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#updated-tokenizer-behavior
|
.md
|
**DISCLAIMER:** The default behaviour for the tokenizer was fixed and thus changed in April 2023.
The previous version added `[self.eos_token_id, self.cur_lang_code]` at the end of the token sequence for both target and source tokenization. This is wrong, as the NLLB paper mentions (page 48, 6.1.1. Model Architecture):
*Note that we prefix the source sequence with the source language, as opposed to the target
language as previously done in several works (Arivazhagan et al., 2019; Johnson et al.,
|
312_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#updated-tokenizer-behavior
|
.md
|
language as previously done in several works (Arivazhagan et al., 2019; Johnson et al.,
2017). This is primarily because we prioritize optimizing zero-shot performance of our
model on any pair of 200 languages at a minor cost to supervised performance.*
Previous behaviour:
```python
>>> from transformers import NllbTokenizer
|
312_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#updated-tokenizer-behavior
|
.md
|
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[13374, 1398, 4260, 4039, 248130, 2, 256047]
>>> # 2: '</s>'
>>> # 256047 : 'eng_Latn'
```
New behaviour:
```python
>>> from transformers import NllbTokenizer
|
312_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#updated-tokenizer-behavior
|
.md
|
>>> # 2: '</s>'
>>> # 256047 : 'eng_Latn'
```
New behaviour:
```python
>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[256047, 13374, 1398, 4260, 4039, 248130, 2]
```
Enabling the old behaviour can be done as follows:
```python
>>> from transformers import NllbTokenizer
|
312_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#updated-tokenizer-behavior
|
.md
|
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True)
```
For more details, feel free to check the linked [PR](https://github.com/huggingface/transformers/pull/22313) and [Issue](https://github.com/huggingface/transformers/issues/19943).
|
312_1_4
|
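To see the new ordering directly, one can map the ids back to tokens; a minimal sketch:
```python
from transformers import NllbTokenizer

tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
ids = tokenizer("How was your day?").input_ids
# With the new behaviour the source language code comes first, followed by the tokens and </s>
print(tokenizer.convert_ids_to_tokens(ids))
```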
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#overview
|
.md
|
The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. Costa-jussà, James Cross, Onur Çelebi,
Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula,
|
312_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#overview
|
.md
|
Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews,
Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers,
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
|
312_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#overview
|
.md
|
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
*Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today.
However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the
|
312_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#overview
|
.md
|
200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by
first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed
|
312_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#overview
|
.md
|
at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of
Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training
|
312_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#overview
|
.md
|
improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using
a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety.
|
312_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#overview
|
.md
|
Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.*
This implementation contains the dense models available on release.
**The sparse model NLLB-MoE (Mixture of Experts) is now available! More details [here](nllb-moe)**
This model was contributed by [Lysandre](https://huggingface.co/lysandre). The authors' code can be found [here](https://github.com/facebookresearch/fairseq/tree/nllb).
|
312_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#generating-with-nllb
|
.md
|
While generating the target text, set the `forced_bos_token_id` to the target language id. The following
example shows how to translate English to French using the *facebook/nllb-200-distilled-600M* model.
Note that we're using the BCP-47 code for French `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200)
for the list of all BCP-47 in the Flores 200 dataset.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
|
312_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#generating-with-nllb
|
.md
|
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
>>> article = "UN Chief says there is no military solution in Syria"
>>> inputs = tokenizer(article, return_tensors="pt")
|
312_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#generating-with-nllb
|
.md
|
>>> article = "UN Chief says there is no military solution in Syria"
>>> inputs = tokenizer(article, return_tensors="pt")
>>> translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"), max_length=30
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie
```
|
312_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#generating-from-any-other-language-than-english
|
.md
|
English (`eng_Latn`) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language,
you should specify the BCP-47 code in the `src_lang` keyword argument of the tokenizer initialization.
See the example below for a translation from Romanian to German:
```py
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
|
312_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#generating-from-any-other-language-than-english
|
.md
|
>>> tokenizer = AutoTokenizer.from_pretrained(
... "facebook/nllb-200-distilled-600M", token=True, src_lang="ron_Latn"
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", token=True)
>>> article = "Şeful ONU spune că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(article, return_tensors="pt")
|
312_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#generating-from-any-other-language-than-english
|
.md
|
>>> article = "Şeful ONU spune că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(article, return_tensors="pt")
>>> translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("deu_Latn"), max_length=30
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
UN-Chef sagt, es gibt keine militärische Lösung in Syrien
```
|
312_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#resources
|
.md
|
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
312_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
Construct an NLLB tokenizer.
Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on
[SentencePiece](https://github.com/google/sentencepiece).
The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code>
<tokens> <eos>` for target language documents.
Examples:
```python
>>> from transformers import NllbTokenizer
|
312_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
>>> tokenizer = NllbTokenizer.from_pretrained(
... "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="fra_Latn"
... )
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
```
Args:
vocab_file (`str`):
Path to the vocabulary file.
|
312_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
```
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
|
312_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
312_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
312_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
312_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenizer_file (`str`, *optional*):
The path to a tokenizer file to use instead of the vocab file.
src_lang (`str`, *optional*):
|
312_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizer
|
.md
|
The path to a tokenizer file to use instead of the vocab file.
src_lang (`str`, *optional*):
The language to use as source language for translation.
tgt_lang (`str`, *optional*):
The language to use as target language for translation.
sp_model_kwargs (`Dict[str, str]`):
Additional keyword arguments to pass to the model initialization.
Methods: build_inputs_with_special_tokens
|
312_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
Construct a "fast" NLLB tokenizer (backed by HuggingFace's *tokenizers* library). Based on
[BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code>
|
312_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code>
<tokens> <eos>` for target language documents.
Examples:
```python
>>> from transformers import NllbTokenizerFast
|
312_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
>>> tokenizer = NllbTokenizerFast.from_pretrained(
... "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="fra_Latn"
... )
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
```
Args:
vocab_file (`str`):
Path to the vocabulary file.
|
312_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
```
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
|
312_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
312_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
312_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
312_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenizer_file (`str`, *optional*):
The path to a tokenizer file to use instead of the vocab file.
src_lang (`str`, *optional*):
|
312_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#nllbtokenizerfast
|
.md
|
The path to a tokenizer file to use instead of the vocab file.
src_lang (`str`, *optional*):
The language to use as source language for translation.
tgt_lang (`str`, *optional*):
The language to use as target language for translation.
|
312_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#using-flash-attention-2
|
.md
|
Flash Attention 2 is a faster, optimized version of the attention score computation which relies on `cuda` kernels.
|
312_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#installation
|
.md
|
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features).
Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
|
312_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#usage
|
.md
|
To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). You can use either `torch.float16` or `torch.bfloat16` precision.
```python
>>> import torch
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
|
312_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#usage
|
.md
|
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to("cuda").eval()
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> article = "Şeful ONU spune că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(article, return_tensors="pt").to("cuda")
|
312_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#usage
|
.md
|
>>> translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("deu_Latn"), max_length=30
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"UN-Chef sagt, es gibt keine militärische Lösung in Syrien"
```
|
312_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#expected-speedups
|
.md
|
Below is an expected speedup diagram comparing pure inference time between the native implementation and Flash Attention 2.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/visheratin/documentation-images/resolve/main/nllb-speedup.webp">
</div>
|
312_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
312_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", torch_dtype=torch.float16, attn_implementation="sdpa")
...
```
|
312_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
|
312_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
313_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
313_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#overview
|
.md
|
The DeiT model was proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre
Sablayrolles, Hervé Jégou. The [Vision Transformer (ViT)](vit) introduced in [Dosovitskiy et al., 2020](https://arxiv.org/abs/2010.11929) has shown that one can match or even outperform existing convolutional neural
|
313_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#overview
|
.md
|
networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on
expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more
efficiently trained transformers for image classification, requiring far less data and far less computing resources
compared to the original ViT models.
The abstract from the paper is the following:
|
313_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#overview
|
.md
|
compared to the original ViT models.
The abstract from the paper is the following:
*Recently, neural networks purely based on attention were shown to address image understanding tasks such as image
classification. However, these visual transformers are pre-trained with hundreds of millions of images using an
expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free
|
313_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#overview
|
.md
|
expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free
transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision
transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external
data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation
|
313_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#overview
|
.md
|
data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation
token ensuring that the student learns from the teacher through attention. We show the interest of this token-based
distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets
for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and
models.*
|
313_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#overview
|
.md
|
for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and
models.*
This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was added by [amyeroberts](https://huggingface.co/amyeroberts).
|
313_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#usage-tips
|
.md
|
- Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the
DeiT paper, is a ResNet-like model). The distillation token is learned through backpropagation, by interacting with
the class ([CLS]) and patch tokens through the self-attention layers.
- There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
|
313_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#usage-tips
|
.md
|
- There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a
prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction
head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the
|
313_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#usage-tips
|
.md
|
head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the
distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the
distillation head and the label predicted by the teacher). At inference time, one takes the average prediction
between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a
|
313_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#usage-tips
|
.md
|
between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a
teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to
[`DeiTForImageClassification`] and (2) corresponds to
[`DeiTForImageClassificationWithTeacher`].
- Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is
|
313_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#usage-tips
|
.md
|
- Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is
trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results.
- All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in
contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training.
|
313_2_4
|
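As a concrete illustration of option (2) at inference time, below is a minimal sketch with the distilled checkpoint; the logits returned by [`DeiTForImageClassificationWithTeacher`] are the average of the class-token and distillation-token heads (the image URL is only an example):
```python
import torch
import requests
from PIL import Image
from transformers import DeiTImageProcessor, DeiTForImageClassificationWithTeacher

checkpoint = "facebook/deit-base-distilled-patch16-224"
processor = DeiTImageProcessor.from_pretrained(checkpoint)
model = DeiTForImageClassificationWithTeacher.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any test image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # average of the CLS head and the distillation head
print(model.config.id2label[logits.argmax(-1).item()])
```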
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#usage-tips
|
.md
|
contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training.
- The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into
[`ViTModel`] or [`ViTForImageClassification`]. Techniques like data
augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
(while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes):
|
313_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#usage-tips
|
.md
|
(while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes):
*facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and
*facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to
prepare images for the model.
|
313_2_6
|
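Following the tip above, a minimal sketch of the loading step for the non-distilled checkpoints (inference then proceeds exactly as in the previous sketch):
```python
from transformers import DeiTImageProcessor, ViTForImageClassification

# Non-distilled DeiT weights are meant to plug into the plain ViT classes;
# DeiTImageProcessor still handles the image preprocessing.
processor = DeiTImageProcessor.from_pretrained("facebook/deit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("facebook/deit-base-patch16-224")
```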
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
313_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import DeiTForImageClassification
model = DeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
|
313_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `facebook/deit-base-distilled-patch16-224` model, we saw the following speedups during inference.
| Batch size | Average inference time (ms), eager mode | Average inference time (ms), SDPA mode | Speedup, SDPA / Eager (x) |
|
313_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#using-scaled-dot-product-attention-sdpa
|
.md
|
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 8 | 6 | 1.33 |
| 2 | 9 | 6 | 1.5 |
|
313_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#using-scaled-dot-product-attention-sdpa
|
.md
|
| 4 | 9 | 6 | 1.5 |
| 8 | 8 | 6 | 1.33 |
|
313_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT.
<PipelineTag pipeline="image-classification"/>
- [`DeiTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
313_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#resources
|
.md
|
- See also: [Image classification task guide](../tasks/image_classification)
Besides that:
- [`DeiTForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
313_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
|
.md
|
This is the configuration class to store the configuration of a [`DeiTModel`]. It is used to instantiate a DeiT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DeiT
[facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224)
architecture.
|
313_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
|
.md
|
[facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
|
313_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
313_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
313_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
|
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
|
313_5_4
|
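As with the [`VipLlavaConfig`] example earlier, instantiating the default configuration is a short sketch:
```python
from transformers import DeiTConfig, DeiTModel

# Initializing a DeiT configuration with default values
configuration = DeiTConfig()

# Initializing a (randomly initialized) model from that configuration
model = DeiTModel(configuration)

# Accessing the model configuration
configuration = model.config
```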