source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraformultiplechoice
.md
No docstring available for FlaxElectraForMultipleChoice Methods: __call__
353_29_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectrafortokenclassification
.md
No docstring available for FlaxElectraForTokenClassification Methods: __call__
353_30_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforquestionanswering
.md
No docstring available for FlaxElectraForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
353_31_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
354_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
354_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#overview
.md
The RoBERTa-PreLayerNorm model was proposed in [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. It is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/). The abstract from the paper is the following:
354_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#overview
.md
The abstract from the paper is the following: *fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.* This model was contributed by [andreasmadsen](https://huggingface.co/andreasmadsen).
354_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#overview
.md
This model was contributed by [andreasmadsen](https://huggingface.co/andreasmadsen). The original code can be found [here](https://github.com/princeton-nlp/DinkyTrain).
354_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#usage-tips
.md
- The implementation is the same as [Roberta](roberta), except that it applies _Norm and Add_ instead of _Add and Norm_. _Add_ and _Norm_ refer to the residual addition and layer normalization described in [Attention Is All You Need](https://arxiv.org/abs/1706.03762); a minimal sketch of the two orderings follows below. - This is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/).
354_2_0
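The ordering difference is easiest to see side by side. Below is a minimal PyTorch sketch of the two residual-block orderings; it is illustrative only, not the actual modeling code in the library. Post-layernorm ("Add and Norm", as in RoBERTa) normalizes after the residual addition, while pre-layernorm ("Norm and Add", as in RoBERTa-PreLayerNorm) normalizes the sublayer input before adding the residual.

```python
import torch
from torch import nn

hidden_size = 768


def post_layernorm_block(x, sublayer, norm):
    # RoBERTa / "Add and Norm": apply the sublayer, add the residual, then normalize.
    return norm(x + sublayer(x))


def pre_layernorm_block(x, sublayer, norm):
    # RoBERTa-PreLayerNorm / "Norm and Add": normalize first, apply the sublayer, then add the residual.
    return x + sublayer(norm(x))


# Toy sublayer standing in for self-attention or the feed-forward network.
sublayer = nn.Linear(hidden_size, hidden_size)
norm = nn.LayerNorm(hidden_size)
x = torch.randn(1, 10, hidden_size)

print(post_layernorm_block(x, sublayer, norm).shape)  # torch.Size([1, 10, 768])
print(pre_layernorm_block(x, sublayer, norm).shape)   # torch.Size([1, 10, 768])
```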
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
354_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
This is the configuration class to store the configuration of a [`RobertaPreLayerNormModel`] or a [`TFRobertaPreLayerNormModel`]. It is used to instantiate a RoBERTa-PreLayerNorm model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RoBERTa-PreLayerNorm [andreasmadsen/efficient_mlm_m0.40](https://huggingface.co/andreasmadsen/efficient_mlm_m0.40) architecture.
354_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
[andreasmadsen/efficient_mlm_m0.40](https://huggingface.co/andreasmadsen/efficient_mlm_m0.40) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50265): Vocabulary size of the RoBERTa-PreLayerNorm model. Defines the number of different tokens that can be represented by the
354_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
Vocabulary size of the RoBERTa-PreLayerNorm model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`RobertaPreLayerNormModel`] or [`TFRobertaPreLayerNormModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
354_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
354_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities.
354_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`RobertaPreLayerNormModel`] or [`TFRobertaPreLayerNormModel`].
354_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
The vocabulary size of the `token_type_ids` passed when calling [`RobertaPreLayerNormModel`] or [`TFRobertaPreLayerNormModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
354_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
354_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`):
354_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. Examples: ```python >>> from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel
354_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
.md
>>> # Initializing a RoBERTa-PreLayerNorm configuration >>> configuration = RobertaPreLayerNormConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = RobertaPreLayerNormModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` <frameworkcontent> <pt>
354_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormmodel
.md
The bare RoBERTa-PreLayerNorm Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
354_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
354_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormmodel
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in *Attention is
354_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormmodel
.md
cross-attention is added between the self-attention layers, following the architecture described in *Attention is all you need* by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
354_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormmodel
.md
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; `encoder_hidden_states` is then expected as an input to the forward pass. (*Attention is all you need*: https://arxiv.org/abs/1706.03762) Methods: forward
354_5_4
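As a rough illustration of the decoder-related arguments described above, the sketch below builds randomly initialized models from a [`RobertaPreLayerNormConfig`]; it only demonstrates the configuration flags and the `encoder_hidden_states` input, not a trained seq2seq setup.

```python
import torch
from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel

# Standalone decoder: only `is_decoder` needs to be set.
decoder_config = RobertaPreLayerNormConfig(is_decoder=True)
decoder = RobertaPreLayerNormModel(decoder_config)  # randomly initialized weights

# Decoder for a Seq2Seq setup: also enable cross-attention and feed
# `encoder_hidden_states` to the forward pass.
seq2seq_config = RobertaPreLayerNormConfig(is_decoder=True, add_cross_attention=True)
seq2seq_decoder = RobertaPreLayerNormModel(seq2seq_config)

input_ids = torch.tensor([[0, 31414, 232, 2]])  # arbitrary token ids
encoder_hidden_states = torch.randn(1, 8, seq2seq_config.hidden_size)
outputs = seq2seq_decoder(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 4, 768])
```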
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforcausallm
.md
RoBERTa-PreLayerNorm Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
354_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
354_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforcausallm
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
354_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformaskedlm
.md
RoBERTa-PreLayerNorm Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
354_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformaskedlm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
354_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformaskedlm
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
354_7_2
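A short usage sketch for the masked-language-modeling head is shown below. It assumes the [andreasmadsen/efficient_mlm_m0.40](https://huggingface.co/andreasmadsen/efficient_mlm_m0.40) checkpoint referenced on this page and that it ships a RoBERTa-style tokenizer with a `<mask>` token; adjust the checkpoint as needed.

```python
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM

checkpoint = "andreasmadsen/efficient_mlm_m0.40"  # checkpoint referenced on this page
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = RobertaPreLayerNormForMaskedLM.from_pretrained(checkpoint)

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the masked position.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```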
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforsequenceclassification
.md
RoBERTa-PreLayerNorm Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
354_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
354_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforsequenceclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
354_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformultiplechoice
.md
RobertaPreLayerNorm Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
354_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformultiplechoice
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the
354_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformultiplechoice
.md
and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
354_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormfortokenclassification
.md
RobertaPreLayerNorm Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
354_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
354_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormfortokenclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
354_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforquestionanswering
.md
RobertaPreLayerNorm Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
354_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the
354_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforquestionanswering
.md
and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
354_11_2
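For the question-answering head, the call pattern looks as follows. The page only references the pretrained MLM checkpoint, so loading it here leaves the span-classification head randomly initialized; this sketch shows the API, not meaningful predictions, and a SQuAD-fine-tuned checkpoint would be needed for real answers.

```python
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForQuestionAnswering

# Loading the MLM checkpoint gives a randomly initialized QA head: the point here
# is the call pattern, not the quality of the extracted span.
checkpoint = "andreasmadsen/efficient_mlm_m0.40"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = RobertaPreLayerNormForQuestionAnswering.from_pretrained(checkpoint)

question = "What does fairseq support?"
context = "fairseq supports distributed training across multiple GPUs and machines."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The highest-scoring start and end logits delimit the predicted answer span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```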
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormmodel
.md
No docstring available for TFRobertaPreLayerNormModel Methods: call
354_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormforcausallm
.md
No docstring available for TFRobertaPreLayerNormForCausalLM Methods: call
354_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormformaskedlm
.md
No docstring available for TFRobertaPreLayerNormForMaskedLM Methods: call
354_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormforsequenceclassification
.md
No docstring available for TFRobertaPreLayerNormForSequenceClassification Methods: call
354_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormformultiplechoice
.md
No docstring available for TFRobertaPreLayerNormForMultipleChoice Methods: call
354_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormfortokenclassification
.md
No docstring available for TFRobertaPreLayerNormForTokenClassification Methods: call
354_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormforquestionanswering
.md
No docstring available for TFRobertaPreLayerNormForQuestionAnswering Methods: call </tf> <jax>
354_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormmodel
.md
No docstring available for FlaxRobertaPreLayerNormModel Methods: __call__
354_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormforcausallm
.md
No docstring available for FlaxRobertaPreLayerNormForCausalLM Methods: __call__
354_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormformaskedlm
.md
No docstring available for FlaxRobertaPreLayerNormForMaskedLM Methods: __call__
354_21_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormforsequenceclassification
.md
No docstring available for FlaxRobertaPreLayerNormForSequenceClassification Methods: __call__
354_22_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormformultiplechoice
.md
No docstring available for FlaxRobertaPreLayerNormForMultipleChoice Methods: __call__
354_23_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormfortokenclassification
.md
No docstring available for FlaxRobertaPreLayerNormForTokenClassification Methods: __call__
354_24_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormforquestionanswering
.md
No docstring available for FlaxRobertaPreLayerNormForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
354_25_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
355_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
355_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#overview
.md
The MobileViTV2 model was proposed in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari. MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention. The abstract from the paper is the following:
355_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#overview
.md
*Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k²) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly
355_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#overview
.md
which requires O(k²) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for
355_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#overview
.md
of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device.*
355_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#overview
.md
This model was contributed by [shehan97](https://huggingface.co/shehan97). The original code can be found [here](https://github.com/apple/ml-cvnets).
355_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#usage-tips
.md
- MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. - One can use [`MobileViTImageProcessor`] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB).
355_2_0
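A minimal classification sketch using the image processor is shown below. The checkpoint name is an assumption (substitute any MobileViTV2 image-classification checkpoint from the Hub); the processor takes care of resizing and of the RGB-to-BGR flip the pretrained weights expect.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

# Checkpoint name is an assumption; substitute any MobileViTV2 checkpoint from the Hub.
checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileViTV2ForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The image processor resizes the image and flips RGB to the BGR order
# the pretrained checkpoints expect.
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```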
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#usage-tips
.md
- The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The segmentation model uses a [DeepLabV3](https://arxiv.org/abs/1706.05587) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/).
355_2_1
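Correspondingly, a hedged sketch for semantic segmentation: the checkpoint name below is a placeholder for a MobileViTV2 DeepLabV3 checkpoint pre-trained on PASCAL VOC, and `post_process_semantic_segmentation` upsamples the low-resolution logits back to the input size.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForSemanticSegmentation

# Placeholder checkpoint name; use an actual MobileViTV2 DeepLabV3 checkpoint
# pre-trained on PASCAL VOC from the Hub.
checkpoint = "apple/mobilevitv2-1.0-voc-deeplabv3"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileViTV2ForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Upsample the low-resolution logits back to the input size and take the
# per-pixel argmax to obtain a class map.
segmentation_map = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[(image.height, image.width)]
)[0]
print(segmentation_map.shape)
```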
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
This is the configuration class to store the configuration of a [`MobileViTV2Model`]. It is used to instantiate a MobileViTV2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MobileViTV2 [apple/mobilevitv2-1.0](https://huggingface.co/apple/mobilevitv2-1.0) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
355_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_channels (`int`, *optional*, defaults to 3): The number of input channels. image_size (`int`, *optional*, defaults to 256): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 2): The size (resolution) of each patch. expand_ratio (`float`, *optional*, defaults to 2.0):
355_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
The size (resolution) of each patch. expand_ratio (`float`, *optional*, defaults to 2.0): Expansion factor for the MobileNetv2 layers. hidden_act (`str` or `function`, *optional*, defaults to `"swish"`): The non-linear activation function (function or string) in the Transformer encoder and convolution layers. conv_kernel_size (`int`, *optional*, defaults to 3): The size of the convolutional kernel in the MobileViTV2 layer. output_stride (`int`, *optional*, defaults to 32):
355_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
The size of the convolutional kernel in the MobileViTV2 layer. output_stride (`int`, *optional*, defaults to 32): The ratio of the spatial resolution of the output to the resolution of the input image. classifier_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for attached classifiers. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
355_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. aspp_out_channels (`int`, *optional*, defaults to 512): Number of output channels used in the ASPP layer for semantic segmentation. atrous_rates (`List[int]`, *optional*, defaults to `[6, 12, 18]`): Dilation (atrous) factors used in the ASPP layer for semantic segmentation.
355_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
Dilation (atrous) factors used in the ASPP layer for semantic segmentation. aspp_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the ASPP layer for semantic segmentation. semantic_loss_ignore_index (`int`, *optional*, defaults to 255): The index that is ignored by the loss function of the semantic segmentation model. n_attn_blocks (`List[int]`, *optional*, defaults to `[2, 4, 3]`): The number of attention blocks in each MobileViTV2Layer
355_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
n_attn_blocks (`List[int]`, *optional*, defaults to `[2, 4, 3]`): The number of attention blocks in each MobileViTV2Layer base_attn_unit_dims (`List[int]`, *optional*, defaults to `[128, 192, 256]`): The base multiplier for dimensions of attention blocks in each MobileViTV2Layer width_multiplier (`float`, *optional*, defaults to 1.0): The width multiplier for MobileViTV2. ffn_multiplier (`int`, *optional*, defaults to 2): The FFN multiplier for MobileViTV2.
355_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
The width multiplier for MobileViTV2. ffn_multiplier (`int`, *optional*, defaults to 2): The FFN multiplier for MobileViTV2. attn_dropout (`float`, *optional*, defaults to 0.0): The dropout in the attention layer. ffn_dropout (`float`, *optional*, defaults to 0.0): The dropout between FFN layers. Example: ```python >>> from transformers import MobileViTV2Config, MobileViTV2Model
355_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
.md
>>> # Initializing a mobilevitv2-small style configuration >>> configuration = MobileViTV2Config() >>> # Initializing a model from the mobilevitv2-small style configuration >>> model = MobileViTV2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
355_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2model
.md
The bare MobileViTV2 model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model.
355_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2model
.md
behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
355_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2forimageclassification
.md
MobileViTV2 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model.
355_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2forimageclassification
.md
behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
355_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2forsemanticsegmentation
.md
MobileViTV2 model with a semantic segmentation head on top, e.g. for Pascal VOC. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model.
355_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2forsemanticsegmentation
.md
behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
355_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
356_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
356_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
.md
The Grounding DINO model was proposed in [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang. Grounding DINO extends a closed-set object detection model with a text encoder, enabling open-set object detection. The model achieves remarkable results, such as 52.5 AP on COCO zero-shot.
356_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
.md
The abstract from the paper is the following:
356_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
.md
*In this paper, we present an open-set object detector, called Grounding DINO, by marrying Transformer-based detector DINO with grounded pre-training, which can detect arbitrary objects with human inputs such as category names or referring expressions. The key solution of open-set object detection is introducing language to a closed-set detector for open-set concept generalization. To effectively fuse language and vision modalities, we conceptually divide a closed-set detector into three phases and propose
356_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
.md
To effectively fuse language and vision modalities, we conceptually divide a closed-set detector into three phases and propose a tight fusion solution, which includes a feature enhancer, a language-guided query selection, and a cross-modality decoder for cross-modality fusion. While previous works mainly evaluate open-set object detection on novel categories, we propose to also perform evaluations on referring expression comprehension for objects specified with attributes. Grounding DINO performs
356_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
.md
also perform evaluations on referring expression comprehension for objects specified with attributes. Grounding DINO performs remarkably well on all three settings, including benchmarks on COCO, LVIS, ODinW, and RefCOCO/+/g. Grounding DINO achieves a 52.5 AP on the COCO detection zero-shot transfer benchmark, i.e., without any training data from COCO. It sets a new record on the ODinW zero-shot benchmark with a mean 26.1 AP.*
356_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grouding_dino_architecture.png" alt="drawing" width="600"/> <small> Grounding DINO overview. Taken from the <a href="https://arxiv.org/abs/2303.05499">original paper</a>. </small> This model was contributed by [EduardoPacheco](https://huggingface.co/EduardoPacheco) and [nielsr](https://huggingface.co/nielsr).
356_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
.md
The original code can be found [here](https://github.com/IDEA-Research/GroundingDINO).
356_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#usage-tips
.md
- One can use [`GroundingDinoProcessor`] to prepare image-text pairs for the model. - To separate classes in the text, use a period, e.g. "a cat. a dog." - When using multiple classes (e.g. `"a cat. a dog."`), use `post_process_grounded_object_detection` from [`GroundingDinoProcessor`] to post-process the outputs, since the labels returned by `post_process_object_detection` are only indices into the model's output dimension where prob > threshold. Here's how to use the model for zero-shot object detection:
356_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#usage-tips
.md
Here's how to use the model for zero-shot object detection: ```python >>> import requests
356_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#usage-tips
.md
>>> import torch >>> from PIL import Image >>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection >>> model_id = "IDEA-Research/grounding-dino-tiny" >>> device = "cuda" >>> processor = AutoProcessor.from_pretrained(model_id) >>> model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)
356_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#usage-tips
.md
>>> image_url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(image_url, stream=True).raw) >>> # Check for cats and remote controls >>> text_labels = [["a cat", "a remote control"]] >>> inputs = processor(images=image, text=text_labels, return_tensors="pt").to(device) >>> with torch.no_grad(): ... outputs = model(**inputs)
356_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#usage-tips
.md
>>> results = processor.post_process_grounded_object_detection( ... outputs, ... threshold=0.4, ... text_threshold=0.3, ... target_sizes=[(image.height, image.width)] ... ) >>> # Retrieve the first image result >>> result = results[0] >>> for box, score, text_label in zip(result["boxes"], result["scores"], result["text_labels"]): ... box = [round(x, 2) for x in box.tolist()] ... print(f"Detected {text_label} with confidence {round(score.item(), 3)} at location {box}")
356_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#usage-tips
.md
... print(f"Detected {text_label} with confidence {round(score.item(), 3)} at location {box}") Detected a cat with confidence 0.479 at location [344.7, 23.11, 637.18, 374.28] Detected a cat with confidence 0.438 at location [12.27, 51.91, 316.86, 472.44] Detected a remote control with confidence 0.478 at location [38.57, 70.0, 176.78, 118.18] ```
356_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#grounded-sam
.md
One can combine Grounding DINO with the [Segment Anything](sam) model for text-based mask generation as introduced in [Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks](https://arxiv.org/abs/2401.14159). You can refer to this [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb) 🌍 for details.
356_3_0
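A condensed sketch of that combination is shown below: Grounding DINO proposes boxes for a text query, and SAM turns those boxes into masks. The Grounding DINO calls mirror the zero-shot detection example above; the SAM checkpoint (`facebook/sam-vit-base`) and its post-processing calls come from the separate SAM integration and are assumptions relative to this page.

```python
import requests
import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Detect boxes with Grounding DINO (same checkpoint as the example above).
dino_id = "IDEA-Research/grounding-dino-tiny"
dino_processor = AutoProcessor.from_pretrained(dino_id)
dino_model = AutoModelForZeroShotObjectDetection.from_pretrained(dino_id).to(device)

image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
inputs = dino_processor(images=image, text=[["a cat"]], return_tensors="pt").to(device)
with torch.no_grad():
    outputs = dino_model(**inputs)
boxes = dino_processor.post_process_grounded_object_detection(
    outputs, threshold=0.4, text_threshold=0.3, target_sizes=[(image.height, image.width)]
)[0]["boxes"]

# 2. Prompt SAM with those boxes to get one mask per detected object.
sam_id = "facebook/sam-vit-base"  # assumed SAM checkpoint, not part of this page
sam_processor = SamProcessor.from_pretrained(sam_id)
sam_model = SamModel.from_pretrained(sam_id).to(device)

sam_inputs = sam_processor(image, input_boxes=[boxes.tolist()], return_tensors="pt").to(device)
with torch.no_grad():
    sam_outputs = sam_model(**sam_inputs)
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks.cpu(),
    sam_inputs["original_sizes"].cpu(),
    sam_inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape)  # (num_boxes, masks_per_box, height, width)
```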
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#grounded-sam
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grounded_sam.png" alt="drawing" width="900"/> <small> Grounded SAM overview. Taken from the <a href="https://github.com/IDEA-Research/Grounded-Segment-Anything">original repository</a>. </small>
356_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Grounding DINO. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
356_4_0