source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormermodel
.md
The Graphormer model is a graph-encoder model. It goes from a graph to its representation. If you want to use the model for a downstream classification task, use GraphormerForGraphClassification instead. For any other downstream task, feel free to add a new class, or combine this model with a downstream model of your choice, following the example in GraphormerForGraphClassification. Methods: forward
177_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerforgraphclassification
.md
This model can be used for graph-level classification or regression tasks. It can be trained on:
- regression (by setting config.num_classes to 1); there should be one float-type label per graph
- one-task classification (by setting config.num_classes to the number of classes); there should be one integer label per graph
- binary multi-task classification (by setting config.num_classes to the number of labels); there should be a list of integer labels for each graph.
Methods: forward
177_6_0
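As a hedged illustration of the three `config.num_classes` settings described in the chunk above (assuming a transformers version that still ships Graphormer; the class counts are illustrative):

```python
from transformers import GraphormerConfig, GraphormerForGraphClassification

# Graph-level regression: one float label per graph
regressor = GraphormerForGraphClassification(GraphormerConfig(num_classes=1))

# Single-task classification: one integer label per graph (10 classes, illustrative)
classifier = GraphormerForGraphClassification(GraphormerConfig(num_classes=10))

# Binary multi-task classification: a list of integer labels per graph (5 tasks, illustrative)
multitask = GraphormerForGraphClassification(GraphormerConfig(num_classes=5))
```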
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
178_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
178_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#overview
.md
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following:
178_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#overview
.md
RoBERTa. The abstract from the paper is the following: *Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
178_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#overview
.md
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
178_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#overview
.md
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
178_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#overview
.md
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.* This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). This model's TF 2.0 implementation was
178_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#overview
.md
This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). This model's TF 2.0 implementation was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/DeBERTa).
178_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/>
178_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
<PipelineTag pipeline="text-classification"/> - A blog post on how to [Accelerate Large Model Training using DeepSpeed](https://huggingface.co/blog/accelerate-deepspeed) with DeBERTa. - A blog post on [Supercharged Customer Service with Machine Learning](https://huggingface.co/blog/supercharge-customer-service-with-machine-learning) with DeBERTa.
178_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [`DebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
178_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [`TFDebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification" />
178_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification" /> - [`DebertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
178_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [`TFDebertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
178_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Byte-Pair Encoding tokenization](https://huggingface.co/course/chapter6/5?fw=pt) chapter of the 🤗 Hugging Face Course. - [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/>
178_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/> - [`DebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
178_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [`TFDebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling)
178_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - [`DebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
178_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#resources
.md
- [`TFDebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering)
178_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
This is the configuration class to store the configuration of a [`DebertaModel`] or a [`TFDebertaModel`]. It is used to instantiate a DeBERTa model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DeBERTa [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) architecture.
178_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
[microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Arguments: vocab_size (`int`, *optional*, defaults to 50265): Vocabulary size of the DeBERTa model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`DebertaModel`] or [`TFDebertaModel`].
178_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
`inputs_ids` passed when calling [`DebertaModel`] or [`TFDebertaModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072):
178_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"`, `"tanh"`, `"gelu_fast"`, `"mish"`, `"linear"`, `"sigmoid"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
178_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
178_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 0): The vocabulary size of the `token_type_ids` passed when calling [`DebertaModel`] or [`TFDebertaModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers.
178_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. relative_attention (`bool`, *optional*, defaults to `False`): Whether to use relative position encoding. max_relative_positions (`int`, *optional*, defaults to 1): The range of relative positions `[-max_position_embeddings, max_position_embeddings]`. Use the same value as `max_position_embeddings`. pad_token_id (`int`, *optional*, defaults to 0): The value used to pad input_ids.
178_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
as `max_position_embeddings`. pad_token_id (`int`, *optional*, defaults to 0): The value used to pad input_ids. position_biased_input (`bool`, *optional*, defaults to `True`): Whether to add absolute position embeddings to the content embeddings. pos_att_type (`List[str]`, *optional*): The type of relative position attention; it can be a combination of `["p2c", "c2p"]`, e.g. `["p2c"]`, `["p2c", "c2p"]`. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers.
178_3_7
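A minimal sketch of how the positional arguments above could be combined when building a DeBERTa config from scratch; the values chosen are illustrative rather than the documented defaults.

```python
from transformers import DebertaConfig, DebertaModel

# Illustrative configuration: disentangled (relative) attention with both
# position-to-content and content-to-position terms, no absolute position bias.
config = DebertaConfig(
    relative_attention=True,
    max_relative_positions=512,   # typically the same value as max_position_embeddings
    pos_att_type=["p2c", "c2p"],
    position_biased_input=False,
    pad_token_id=0,
)
model = DebertaModel(config)  # randomly initialized weights
```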
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
`["p2c", "c2p"]`. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. legacy (`bool`, *optional*, defaults to `True`): Whether or not the model should use the legacy `LegacyDebertaOnlyMLMHead`, which does not work properly for mask infilling tasks. Example: ```python >>> from transformers import DebertaConfig, DebertaModel
178_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaconfig
.md
>>> # Initializing a DeBERTa microsoft/deberta-base style configuration >>> configuration = DebertaConfig() >>> # Initializing a model (with random weights) from the microsoft/deberta-base style configuration >>> model = DebertaModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
178_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
Construct a DeBERTa tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: ```python >>> from transformers import DebertaTokenizer >>> tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base") >>> tokenizer("Hello world")["input_ids"] [1, 31414, 232, 2]
178_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
>>> tokenizer(" Hello world")["input_ids"] [1, 20920, 232, 2] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip>
178_4_1
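A short sketch of the `add_prefix_space=True` workaround mentioned above (token ids omitted, since they depend on the downloaded vocabulary files):

```python
from transformers import DebertaTokenizer

# With add_prefix_space=True the leading word is encoded as if it were preceded by a space
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base", add_prefix_space=True)
print(tokenizer("Hello world")["input_ids"])
```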
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip> This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See
178_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. bos_token (`str`, *optional*, defaults to `"[CLS]"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. sep_token (`str`, *optional*, defaults to `"[SEP]"`):
178_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
The end of sequence token. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
178_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"[PAD]"`):
178_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
token instead. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. add_prefix_space (`bool`, *optional*, defaults to `False`):
178_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
modeling. This is the token which the model will try to predict. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word. (The DeBERTa tokenizer detects the beginning of words by the preceding space.) add_bos_token (`bool`, *optional*, defaults to `False`): Whether or not to add an initial <|endoftext|> to the input. This allows the leading word to be treated just like any other word.
178_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizer
.md
Whether or not to add an initial <|endoftext|> to the input. This allows the leading word to be treated just like any other word. Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
178_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
Construct a "fast" DeBERTa tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: ```python >>> from transformers import DebertaTokenizerFast
178_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
>>> tokenizer = DebertaTokenizerFast.from_pretrained("microsoft/deberta-base") >>> tokenizer("Hello world")["input_ids"] [1, 31414, 232, 2]
178_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
>>> tokenizer(" Hello world")["input_ids"] [1, 20920, 232, 2] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`. </Tip> This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
178_5_2
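As noted in the tip above, the fast tokenizer must be instantiated with `add_prefix_space=True` when feeding it pre-tokenized input; a minimal sketch:

```python
from transformers import DebertaTokenizerFast

tokenizer = DebertaTokenizerFast.from_pretrained("microsoft/deberta-base", add_prefix_space=True)

# Pre-tokenized input: a space is added before every word, including the first one
encoding = tokenizer(["Hello", "world"], is_split_into_words=True)
print(encoding["input_ids"])
```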
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
</Tip> This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`, *optional*): Path to the vocabulary file. merges_file (`str`, *optional*): Path to the merges file. tokenizer_file (`str`, *optional*): The path to a tokenizer file to use instead of the vocab file. errors (`str`, *optional*, defaults to `"replace"`):
178_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
The path to a tokenizer file to use instead of the vocab file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. bos_token (`str`, *optional*, defaults to `"[CLS]"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. sep_token (`str`, *optional*, defaults to `"[SEP]"`):
178_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
The end of sequence token. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
178_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"[PAD]"`):
178_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
token instead. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. add_prefix_space (`bool`, *optional*, defaults to `False`):
178_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertatokenizerfast
.md
modeling. This is the token which the model will try to predict. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word. (The DeBERTa tokenizer detects the beginning of words by the preceding space.) Methods: build_inputs_with_special_tokens - create_token_type_ids_from_sequences <frameworkcontent> <pt>
178_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertamodel
.md
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top. The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
178_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertamodel
.md
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`DebertaConfig`]): Model configuration class with all the parameters of the model.
178_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertamodel
.md
and behavior. Parameters: config ([`DebertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
178_6_2
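A hedged usage sketch for the bare model's `forward`, returning raw hidden states without a task head:

```python
import torch
from transformers import AutoTokenizer, DebertaModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaModel.from_pretrained("microsoft/deberta-base")

inputs = tokenizer("DeBERTa uses disentangled attention.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```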
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertapretrainedmodel
.md
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.
178_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaformaskedlm
.md
DeBERTa Model with a `language modeling` head on top. The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
178_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaformaskedlm
.md
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`DebertaConfig`]): Model configuration class with all the parameters of the model.
178_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaformaskedlm
.md
and behavior. Parameters: config ([`DebertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
178_8_2
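A minimal mask-filling sketch with `DebertaForMaskedLM`; note the `legacy` flag discussed in the config section, which affects how well mask infilling works with older checkpoints.

```python
import torch
from transformers import AutoTokenizer, DebertaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForMaskedLM.from_pretrained("microsoft/deberta-base")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token at the masked position
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```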
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaforsequenceclassification
.md
DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
178_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaforsequenceclassification
.md
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
178_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaforsequenceclassification
.md
and behavior. Parameters: config ([`DebertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
178_9_2
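A hedged sketch of the sequence-classification head's `forward`; `num_labels` is illustrative and the head is randomly initialized until fine-tuned.

```python
import torch
from transformers import AutoTokenizer, DebertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
labels = torch.tensor([1])  # illustrative gold label

outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits)
```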
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertafortokenclassification
.md
DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
178_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertafortokenclassification
.md
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
178_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertafortokenclassification
.md
and behavior. Parameters: config ([`DebertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
178_10_2
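A short sketch of the token-classification head (e.g. for NER); the label count is illustrative and the head is randomly initialized until fine-tuned.

```python
import torch
from transformers import AutoTokenizer, DebertaForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForTokenClassification.from_pretrained("microsoft/deberta-base", num_labels=9)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.argmax(dim=-1))  # one predicted label id per token
```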
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaforquestionanswering
.md
DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
178_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaforquestionanswering
.md
Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
178_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaforquestionanswering
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`DebertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
178_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#debertaforquestionanswering
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
178_11_3
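A hedged extractive QA sketch; the base checkpoint has no fine-tuned QA head, so this only illustrates the API shape.

```python
import torch
from transformers import AutoTokenizer, DebertaForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForQuestionAnswering.from_pretrained("microsoft/deberta-base")

question = "Who proposed DeBERTa?"
context = "DeBERTa was proposed by researchers at Microsoft."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```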
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#tfdebertamodel
.md
No docstring available for TFDebertaModel Methods: call
178_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#tfdebertapretrainedmodel
.md
No docstring available for TFDebertaPreTrainedModel Methods: call
178_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#tfdebertaformaskedlm
.md
No docstring available for TFDebertaForMaskedLM Methods: call
178_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#tfdebertaforsequenceclassification
.md
No docstring available for TFDebertaForSequenceClassification Methods: call
178_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#tfdebertafortokenclassification
.md
No docstring available for TFDebertaForTokenClassification Methods: call
178_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta.md
https://huggingface.co/docs/transformers/en/model_doc/deberta/#tfdebertaforquestionanswering
.md
No docstring available for TFDebertaForQuestionAnswering Methods: call </tf> </frameworkcontent>
178_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
179_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
179_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#hybrid-vision-transformer-vit-hybrid
.md
<Tip warning={true}> This model is in maintenance mode only; we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2. You can do so by running the following command: `pip install -U transformers==4.40.2`. </Tip>
179_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#overview
.md
The hybrid Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
179_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#overview
.md
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the [plain Vision Transformer](vit), by leveraging a convolutional backbone (specifically, [BiT](bit)) whose features are used as initial "tokens" for the Transformer. The abstract from the paper is the following:
179_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#overview
.md
The abstract from the paper is the following: *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
179_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#overview
.md
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
179_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#overview
.md
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code (written in JAX) can be found [here](https://github.com/google-research/vision_transformer).
179_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
179_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```python
import torch
from transformers import ViTHybridForImageClassification

model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
179_3_1
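Building on the snippet above, a hedged end-to-end half-precision inference sketch (assumes a CUDA device and network access; the COCO image URL is only an illustration):

```python
import requests
import torch
from PIL import Image
from transformers import ViTHybridForImageClassification, ViTHybridImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridForImageClassification.from_pretrained(
    "google/vit-hybrid-base-bit-384", attn_implementation="sdpa", torch_dtype=torch.float16
).to("cuda")

inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```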
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#using-scaled-dot-product-attention-sdpa
.md
... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (A100-40GB, PyTorch 2.3.0, Ubuntu 22.04) with `float32` and the `google/vit-hybrid-base-bit-384` model, we saw the following speedups during inference.

| Batch size | Average inference time (ms), eager mode | Average inference time (ms), sdpa mode | Speed up, SDPA / Eager (x) |
179_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#using-scaled-dot-product-attention-sdpa
.md
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 29 | 18 | 1.61 |
| 2 | 26 | 18 | 1.44 |
179_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#using-scaled-dot-product-attention-sdpa
.md
| 4 | 25 | 18 | 1.39 |
| 8 | 34 | 24 | 1.42 |
179_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT Hybrid. <PipelineTag pipeline="image-classification"/> - [`ViTHybridForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
179_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#resources
.md
- See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
179_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
This is the configuration class to store the configuration of a [`ViTHybridModel`]. It is used to instantiate a ViT Hybrid model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViT Hybrid [google/vit-hybrid-base-bit-384](https://huggingface.co/google/vit-hybrid-base-bit-384) architecture.
179_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
[google/vit-hybrid-base-bit-384](https://huggingface.co/google/vit-hybrid-base-bit-384) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: backbone_config (`Union[Dict[str, Any], PretrainedConfig]`, *optional*): The configuration of the backbone in a dictionary or the config object of the backbone. backbone (`str`, *optional*):
179_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
The configuration of the backbone in a dictionary or the config object of the backbone. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
179_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`): Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers library. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
179_5_3
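A hedged sketch of passing an explicit `backbone_config` (ViT hybrid expects a BiT-style convolutional stem; the values shown are illustrative rather than the exact defaults):

```python
from transformers import BitConfig, ViTHybridConfig, ViTHybridModel

# Explicit backbone configuration for the convolutional stem
backbone_config = BitConfig(out_features=["stage3"])
config = ViTHybridConfig(backbone_config=backbone_config)

model = ViTHybridModel(config)  # randomly initialized weights
```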
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072):
179_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
179_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
`"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
179_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 1): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels.
179_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. backbone_featmap_shape (`List[int]`, *optional*, defaults to `[1, 1024, 24, 24]`): Used only for the `hybrid` embedding type. The shape of the feature maps of the backbone. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. Example: ```python >>> from transformers import ViTHybridConfig, ViTHybridModel
179_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridconfig
.md
>>> # Initializing a ViT Hybrid vit-hybrid-base-bit-384 style configuration >>> configuration = ViTHybridConfig() >>> # Initializing a model (with random weights) from the vit-hybrid-base-bit-384 style configuration >>> model = ViTHybridModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
179_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridimageprocessor
.md
Constructs a ViT Hybrid image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`): Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
179_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridimageprocessor
.md
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`):
179_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridimageprocessor
.md
do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the `preprocess` method. crop_size (`Dict[str, int]` *optional*, defaults to 224): Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
179_6_2
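A short sketch of these preprocessing options in use; the per-call `size` override is illustrative.

```python
import numpy as np
from PIL import Image
from transformers import ViTHybridImageProcessor

processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")

# A dummy RGB image stands in for real data
image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

# Defaults come from the checkpoint; individual steps can be overridden per call
inputs = processor(images=image, size={"shortest_edge": 384}, return_tensors="pt")
print(inputs["pixel_values"].shape)
```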