## FocalNetConfig

focal_windows (`list(int)`, *optional*, defaults to `[3, 3, 3, 3]`):
    Focal window size in each layer of the respective stages in the encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
    The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`,
    `"selu"` and `"gelu_new"` are supported.
mlp_ratio (`float`, *optional*, defaults to 4.0):
    Ratio of MLP hidden dimensionality to embedding dimensionality.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
    The dropout probability for all fully connected layers in the embeddings and encoder.
drop_path_rate (`float`, *optional*, defaults to 0.1):
    Stochastic depth rate.
use_layerscale (`bool`, *optional*, defaults to `False`):
    Whether to use layer scale in the encoder.
layerscale_value (`float`, *optional*, defaults to 0.0001):
    The initial value of the layer scale.
use_post_layernorm (`bool`, *optional*, defaults to `False`):
    Whether to use post layer normalization in the encoder.
use_post_layernorm_in_modulation (`bool`, *optional*, defaults to `False`):
    Whether to use post layer normalization in the modulation layer.
normalize_modulator (`bool`, *optional*, defaults to `False`):
    Whether to normalize the modulator.
initializer_range (`float`, *optional*, defaults to 0.02):
    The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
    The epsilon used by the layer normalization layers.
encoder_stride (`int`, *optional*, defaults to 32):
    Factor to increase the spatial resolution by in the decoder head for masked image modeling.
out_features (`List[str]`, *optional*):
    If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
    (depending on how many stages the model has). If unset and `out_indices` is set, will default to the
    corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the
    same order as defined in the `stage_names` attribute.
out_indices (`List[int]`, *optional*):
    If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
    many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
    If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined
    in the `stage_names` attribute.

Example:

```python
>>> from transformers import FocalNetConfig, FocalNetModel

>>> # Initializing a FocalNet microsoft/focalnet-tiny style configuration
>>> configuration = FocalNetConfig()

>>> # Initializing a model (with random weights) from the microsoft/focalnet-tiny style configuration
>>> model = FocalNetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
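The `out_features`/`out_indices` arguments only matter when FocalNet is used as a backbone. A minimal sketch with random weights, assuming the `FocalNetBackbone` class (the stage choice is illustrative):

```python
>>> import torch
>>> from transformers import FocalNetConfig, FocalNetBackbone

>>> # Request the feature maps of two named stages
>>> config = FocalNetConfig(out_features=["stage2", "stage4"])
>>> backbone = FocalNetBackbone(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)
>>> outputs = backbone(pixel_values)
>>> [fmap.shape for fmap in outputs.feature_maps]  # one feature map per requested stage
```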
## FocalNetModel

The bare FocalNet Model outputting raw hidden-states without any specific head on top.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.

Parameters:
    config ([`FocalNetConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
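A short usage sketch; the `microsoft/focalnet-tiny` checkpoint is an assumption here, any FocalNet checkpoint behaves the same way:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, FocalNetModel

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
>>> model = FocalNetModel.from_pretrained("microsoft/focalnet-tiny")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
```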
## FocalNetForMaskedImageModeling

FocalNet Model with a decoder on top for masked image modeling. This follows the same implementation as in
[SimMIM](https://arxiv.org/abs/2111.09886).

<Tip>

Note that we provide a script to pre-train this model on custom data in our [examples
directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining).

</Tip>

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.

Parameters:
    config ([`FocalNetConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
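A hedged sketch of the SimMIM-style forward pass; the random mask and the checkpoint name (`microsoft/focalnet-base-simmim-window6-192`) are illustrative assumptions:

```python
>>> import torch
>>> from transformers import FocalNetForMaskedImageModeling

>>> checkpoint = "microsoft/focalnet-base-simmim-window6-192"  # assumed SimMIM-style checkpoint
>>> model = FocalNetForMaskedImageModeling.from_pretrained(checkpoint)

>>> config = model.config
>>> pixel_values = torch.randn(1, config.num_channels, config.image_size, config.image_size)
>>> num_patches = (config.image_size // config.patch_size) ** 2
>>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()  # mask ~50% of the patches

>>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
>>> outputs.reconstruction.shape  # reconstructed pixel values, same shape as the input
```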
## FocalNetForImageClassification

FocalNet Model with an image classification head on top (a linear layer on top of the pooled output) e.g. for
ImageNet.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.

Parameters:
    config ([`FocalNetConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
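A hedged classification sketch (again assuming the `microsoft/focalnet-tiny` checkpoint, which ships an ImageNet head):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, FocalNetForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
>>> model = FocalNetForImageClassification.from_pretrained("microsoft/focalnet-tiny")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
```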
# ERNIE

## Overview

ERNIE is a series of powerful models proposed by Baidu, particularly strong on Chinese tasks, including
[ERNIE1.0](https://arxiv.org/abs/1904.09223), [ERNIE2.0](https://ojs.aaai.org/index.php/AAAI/article/view/6428),
[ERNIE3.0](https://arxiv.org/abs/2107.02137), [ERNIE-Gram](https://arxiv.org/abs/2010.12148),
[ERNIE-health](https://arxiv.org/abs/2110.07244), etc.

These models were contributed by [nghuyong](https://huggingface.co/nghuyong), and the official code can be found
in [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) (in PaddlePaddle).
## Usage example

Take `ernie-1.0-base-zh` as an example:

```Python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
```
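Continuing the example, a minimal inference sketch (the sample sentence is arbitrary):

```Python
import torch

inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")  # "Baidu is a high-tech company"
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embedding of every input token
print(outputs.last_hidden_state.shape)
```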
## Model checkpoints

|     Model Name      | Language |           Description           |
|:-------------------:|:--------:|:-------------------------------:|
|  ernie-1.0-base-zh  | Chinese  | Layer:12, Heads:12, Hidden:768  |
|  ernie-2.0-base-en  | English  | Layer:12, Heads:12, Hidden:768  |
| ernie-2.0-large-en  | English  | Layer:24, Heads:16, Hidden:1024 |
|  ernie-3.0-base-zh  | Chinese  | Layer:12, Heads:12, Hidden:768  |
| ernie-3.0-medium-zh | Chinese  | Layer:6, Heads:12, Hidden:768   |
|  ernie-3.0-mini-zh  | Chinese  | Layer:6, Heads:12, Hidden:384   |
| ernie-3.0-micro-zh  | Chinese  | Layer:4, Heads:12, Hidden:384   |
|  ernie-3.0-nano-zh  | Chinese  | Layer:4, Heads:12, Hidden:312   |
|   ernie-health-zh   | Chinese  | Layer:12, Heads:12, Hidden:768  |
|    ernie-gram-zh    | Chinese  | Layer:12, Heads:12, Hidden:768  |

You can find all the supported models on Hugging Face's model hub:
[huggingface.co/nghuyong](https://huggingface.co/nghuyong), and model details in PaddlePaddle's official repos:
[PaddleNLP](https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html) and
[ERNIE](https://github.com/PaddlePaddle/ERNIE/blob/repro).
## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## ErnieConfig

This is the configuration class to store the configuration of a [`ErnieModel`] or a [`TFErnieModel`]. It is used
to instantiate an ERNIE model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the ERNIE
[nghuyong/ernie-3.0-base-zh](https://huggingface.co/nghuyong/ernie-3.0-base-zh) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.

Args:
    vocab_size (`int`, *optional*, defaults to 30522):
        Vocabulary size of the ERNIE model. Defines the number of different tokens that can be represented by
        the `inputs_ids` passed when calling [`ErnieModel`] or [`TFErnieModel`].
    hidden_size (`int`, *optional*, defaults to 768):
        Dimensionality of the encoder layers and the pooler layer.
    num_hidden_layers (`int`, *optional*, defaults to 12):
        Number of hidden layers in the Transformer encoder.
    num_attention_heads (`int`, *optional*, defaults to 12):
        Number of attention heads for each attention layer in the Transformer encoder.
    intermediate_size (`int`, *optional*, defaults to 3072):
        Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
    hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
        The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
        `"relu"`, `"silu"` and `"gelu_new"` are supported.
    hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
        The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
    attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
        The dropout ratio for the attention probabilities.
    max_position_embeddings (`int`, *optional*, defaults to 512):
        The maximum sequence length that this model might ever be used with. Typically set this to something
        large just in case (e.g., 512 or 1024 or 2048).
    type_vocab_size (`int`, *optional*, defaults to 2):
        The vocabulary size of the `token_type_ids` passed when calling [`ErnieModel`] or [`TFErnieModel`].
    task_type_vocab_size (`int`, *optional*, defaults to 3):
        The vocabulary size of the `task_type_ids` for the ERNIE2.0/ERNIE3.0 models.
    use_task_id (`bool`, *optional*, defaults to `False`):
        Whether or not the model supports `task_type_ids`.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    layer_norm_eps (`float`, *optional*, defaults to 1e-12):
        The epsilon used by the layer normalization layers.
    pad_token_id (`int`, *optional*, defaults to 0):
        Padding token id.
    position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
        Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
        positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
        [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
        For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer
        Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
    use_cache (`bool`, *optional*, defaults to `True`):
        Whether or not the model should return the last key/values attentions (not used by all models). Only
        relevant if `config.is_decoder=True`.
    classifier_dropout (`float`, *optional*):
        The dropout ratio for the classification head.

Examples:

```python
>>> from transformers import ErnieConfig, ErnieModel

>>> # Initializing an ERNIE nghuyong/ernie-3.0-base-zh style configuration
>>> configuration = ErnieConfig()

>>> # Initializing a model (with random weights) from the nghuyong/ernie-3.0-base-zh style configuration
>>> model = ErnieModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

Methods: all
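The `task_type_vocab_size`/`use_task_id` pair is specific to ERNIE 2.0/3.0 continual pre-training; a minimal sketch with random weights and arbitrary input ids:

```python
>>> import torch
>>> from transformers import ErnieConfig, ErnieModel

>>> config = ErnieConfig(use_task_id=True, task_type_vocab_size=3)
>>> model = ErnieModel(config)

>>> input_ids = torch.tensor([[1, 2, 3, 4]])
>>> task_type_ids = torch.zeros_like(input_ids)  # every token tagged with task 0
>>> outputs = model(input_ids=input_ids, task_type_ids=task_type_ids)
```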
## Ernie specific outputs

models.ernie.modeling_ernie.ErnieForPreTrainingOutput

Output type of [`ErnieForPreTraining`].

Args:
    loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`):
        Total loss as the sum of the masked language modeling loss and the next sequence prediction
        (classification) loss.
    prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
        Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
    seq_relationship_logits (`torch.FloatTensor` of shape `(batch_size, 2)`):
        Prediction scores of the next sequence prediction (classification) head (scores of True/False
        continuation before SoftMax).
    hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
        Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
        of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each
        layer plus the initial embedding outputs.
    attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
        sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average
        in the self-attention heads.
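A sketch of how these fields look in practice, assuming the `nghuyong/ernie-1.0-base-zh` checkpoint (the two pretraining heads may be freshly initialized if the checkpoint does not ship them):

```python
import torch
from transformers import AutoTokenizer, ErnieForPreTraining

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = ErnieForPreTraining.from_pretrained("nghuyong/ernie-1.0-base-zh")

inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # (batch_size, sequence_length, vocab_size)
print(outputs.seq_relationship_logits.shape)  # (batch_size, 2)
```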
## ErnieModel

The bare Ernie Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention
is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.

To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument
and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward
pass.

Methods: forward
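As a sketch, turning the model into a decoder only requires the configuration flags described above (random weights, no particular checkpoint implied):

```python
from transformers import ErnieConfig, ErnieModel

# is_decoder enables causal masking; add_cross_attention additionally
# inserts cross-attention layers for use inside a Seq2Seq model
config = ErnieConfig(is_decoder=True, add_cross_attention=True)
decoder = ErnieModel(config)
```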
## ErnieForPreTraining

Ernie Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next
sentence prediction (classification)` head.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ErnieForCausalLM

Ernie Model with a `language modeling` head on top for CLM fine-tuning.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ErnieForMaskedLM

Ernie Model with a `language modeling` head on top.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
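As a sketch, the masked-LM head can be exercised through the `fill-mask` pipeline; the `nghuyong/ernie-3.0-base-zh` checkpoint is an assumption, any ERNIE checkpoint with a language-modeling head works:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nghuyong/ernie-3.0-base-zh")
# The tokenizer uses the BERT-style [MASK] token
print(fill_mask("巴黎是[MASK]国的首都。"))  # "Paris is the capital of [MASK]."
```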
## ErnieForNextSentencePrediction

Ernie Model with a `next sentence prediction (classification)` head on top.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ErnieForSequenceClassification

Ernie Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ErnieForMultipleChoice

Ernie Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ErnieForTokenClassification

Ernie Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ErnieForQuestionAnswering

Ernie Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.

Parameters:
    config ([`ErnieConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
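A hedged end-to-end sketch; note that the span-classification head on top of `nghuyong/ernie-3.0-base-zh` is randomly initialized here, so sensible answers require fine-tuning first:

```python
import torch
from transformers import AutoTokenizer, ErnieForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-3.0-base-zh")
model = ErnieForQuestionAnswering.from_pretrained("nghuyong/ernie-3.0-base-zh")

question, context = "华为的总部在哪里?", "华为的总部位于深圳。"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end positions and decode the span between them
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```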
# CLIP

## Overview

The CLIP model was proposed in [Learning Transferable Visual Models From Natural Language
Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh,
Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya
Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image,
text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image,
without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3.

The abstract from the paper is the following:

*State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.
This restricted form of supervision limits their generality and usability since additional labeled data is
needed to specify any other visual concept. Learning directly from raw text about images is a promising
alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training
task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image
representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After
pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling
zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking
on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in
videos, geo-localization, and many types of fine-grained object classification. The model transfers
non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any
dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot
without needing to use any of the 1.28 million training examples it was trained on. We release our code and
pre-trained model weights at this https URL.*

This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found
[here](https://github.com/openai/CLIP).
## Usage tips and example

CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot
image classification. CLIP uses a ViT-like transformer to get visual features and a causal language model to get
the text features. Both the text and visual features are then projected to a latent space with identical
dimension. The dot product between the projected image and text features is then used as a similarity score.

To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping
patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image.
The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard
Transformer encoder. The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the
model. The [`CLIPTokenizer`] is used to encode the text. The [`CLIPProcessor`] wraps [`CLIPImageProcessor`] and
[`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example
shows how to get the image-text similarity scores using [`CLIPProcessor`] and [`CLIPModel`].

```python
>>> from PIL import Image
>>> import requests

>>> from transformers import CLIPProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
## Combining CLIP and Flash Attention 2

First, make sure to install the latest version of Flash Attention 2.

```bash
pip install -U flash-attn --no-build-isolation
```

Also make sure that your hardware is compatible with Flash Attention 2; read more about it in the official
documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g.
`torch.float16`).

<Tip warning={true}>

For small batch sizes, you might notice a slowdown in your model when using flash attention. Refer to the
section [Expected speedups with Flash Attention and SDPA](#Expected-speedups-with-Flash-Attention-and-SDPA)
below and select an appropriate attention implementation.

</Tip>

To load and run a model using Flash Attention 2, refer to the snippet below:

```python
>>> import torch
>>> import requests
>>> from PIL import Image

>>> from transformers import CLIPProcessor, CLIPModel

>>> device = "cuda"
>>> torch_dtype = torch.float16

>>> model = CLIPModel.from_pretrained(
...     "openai/clip-vit-base-patch32",
...     attn_implementation="flash_attention_2",
...     device_map=device,
...     torch_dtype=torch_dtype,
... )
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> inputs = inputs.to(device)

>>> with torch.no_grad():
...     with torch.autocast(device):
...         outputs = model(**inputs)

>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
>>> print(probs)
tensor([[0.9946, 0.0052]], device='cuda:0', dtype=torch.float16)
```
## Using Scaled Dot Product Attention (SDPA)

PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This
function encompasses several implementations that can be applied depending on the inputs and the hardware in
use. See the [official
documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU
Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.

SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```python
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32", torch_dtype=torch.float16, attn_implementation="sdpa")
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or
`torch.bfloat16`).
## Expected speedups with Flash Attention and SDPA

On a local benchmark (NVIDIA A10G, PyTorch 2.3.1+cu121) with `float16`, we saw the following speedups during
inference for the `"openai/clip-vit-large-patch14"` checkpoint
([code](https://gist.github.com/qubvel/ac691a54e54f9fae8144275f866a7ff8)):
### CLIPTextModel

| Num text labels | Eager (s/iter) | FA2 (s/iter) | FA2 speedup | SDPA (s/iter) | SDPA speedup |
|----------------:|---------------:|-------------:|------------:|--------------:|-------------:|
|               4 |          0.009 |        0.012 |       0.737 |         0.007 |        1.269 |
|              16 |          0.009 |        0.014 |       0.659 |         0.008 |        1.187 |
|              32 |          0.018 |        0.021 |       0.862 |         0.016 |        1.142 |
|              64 |          0.034 |        0.034 |       1.001 |          0.03 |        1.163 |
|             128 |          0.063 |        0.058 |        1.09 |         0.054 |        1.174 |

![clip_text_model_viz_3](https://github.com/user-attachments/assets/e9826b43-4e66-4f4c-952b-af4d90bd38eb)
### CLIPVisionModel

| Image batch size | Eager (s/iter) | FA2 (s/iter) | FA2 speedup | SDPA (s/iter) | SDPA speedup |
|-----------------:|---------------:|-------------:|------------:|--------------:|-------------:|
|                1 |          0.016 |        0.013 |       1.247 |         0.012 |        1.318 |
|                4 |          0.025 |        0.021 |       1.198 |         0.021 |        1.202 |
|               16 |          0.093 |        0.075 |       1.234 |         0.075 |         1.24 |
|               32 |          0.181 |        0.147 |       1.237 |         0.146 |        1.241 |

![clip_image_model_viz_3](https://github.com/user-attachments/assets/50a36206-e3b9-4adc-ac8e-926b8b071d63)
### CLIPModel

| Image batch size | Num text labels | Eager (s/iter) | FA2 (s/iter) | FA2 speedup | SDPA (s/iter) | SDPA speedup |
|-----------------:|----------------:|---------------:|-------------:|------------:|--------------:|-------------:|
|                1 |               4 |          0.025 |        0.026 |       0.954 |          0.02 |        1.217 |
|                1 |              16 |          0.026 |        0.028 |       0.918 |          0.02 |        1.287 |
|                1 |              64 |          0.042 |        0.046 |       0.906 |         0.036 |        1.167 |
|                4 |               4 |          0.028 |        0.033 |       0.849 |         0.024 |        1.189 |
|                4 |              16 |          0.034 |        0.035 |       0.955 |         0.029 |        1.169 |
|                4 |              64 |          0.059 |        0.055 |       1.072 |          0.05 |        1.179 |
|               16 |               4 |          0.096 |        0.088 |       1.091 |         0.078 |        1.234 |
|               16 |              16 |          0.102 |         0.09 |       1.129 |         0.083 |        1.224 |
|               16 |              64 |          0.127 |         0.11 |       1.157 |         0.105 |        1.218 |
|               32 |               4 |          0.185 |        0.159 |       1.157 |         0.149 |        1.238 |