source: stringclasses, 470 values
url: stringlengths, 49 to 167
file_type: stringclasses, 1 value
chunk: stringlengths, 1 to 512
chunk_id: stringlengths, 5 to 9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#inference-with-50-batches
.md
|8 |256 |0.271 |0.231 |17.271 |91.202 |92.246 |-1.132 |
|8 |512 |0.602 |0.48 |25.47 |186.159 |152.564 |22.021 |
|16 |128 |0.252 |0.224 |12.506 |91.202 |91.722 |-0.567 |
364_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#inference-with-50-batches
.md
|16 |256 |0.526 |0.448 |17.604 |148.378 |150.467 |-1.388 |
|16 |512 |1.203 |0.96 |25.365 |338.293 |271.102 |24.784 |

This model was contributed by [lysandre](https://huggingface.co/lysandre). The JAX version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/ALBERT).
364_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/>
364_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
<PipelineTag pipeline="text-classification"/> - [`AlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification). - [`TFAlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification).
364_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [`FlaxAlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - Check the [Text classification task guide](../tasks/sequence_classification) on how to use the model. <PipelineTag pipeline="token-classification"/>
364_7_2
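Beyond the linked scripts, here is a minimal sketch of loading ALBERT for sequence classification; the head on `albert/albert-base-v2` is freshly initialized in this example and must be fine-tuned before its predictions mean anything.

```python
from transformers import AlbertForSequenceClassification, AlbertTokenizer

# Base checkpoint plus a randomly initialized 2-label classification head
tokenizer = AlbertTokenizer.from_pretrained("albert/albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained("albert/albert-base-v2", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, num_labels)
```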
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
<PipelineTag pipeline="token-classification"/> - [`AlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification).
364_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [`TFAlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxAlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
364_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Token classification task guide](../tasks/token_classification) on how to use the model. <PipelineTag pipeline="fill-mask"/>
364_7_5
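Similarly, a hedged sketch for token classification with ALBERT; the head is randomly initialized on the base checkpoint and would be fine-tuned on an NER dataset in practice.

```python
from transformers import AlbertForTokenClassification, AlbertTokenizerFast

# num_labels=9 mirrors common NER label sets such as CoNLL-2003; adjust to your data
tokenizer = AlbertTokenizerFast.from_pretrained("albert/albert-base-v2")
model = AlbertForTokenClassification.from_pretrained("albert/albert-base-v2", num_labels=9)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)
```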
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
<PipelineTag pipeline="fill-mask"/> - [`AlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
364_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [`TFAlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
364_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [`FlaxAlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
364_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Masked language modeling task guide](../tasks/masked_language_modeling) on how to use the model. <PipelineTag pipeline="question-answering"/>
364_7_9
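Masked language modeling works out of the box with the pretrained MLM head; a minimal pipeline example using the public `albert/albert-base-v2` checkpoint:

```python
from transformers import pipeline

# ALBERT uses [MASK] as its mask token
unmasker = pipeline("fill-mask", model="albert/albert-base-v2")
print(unmasker("The capital of France is [MASK].")[0]["token_str"])
```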
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
<PipelineTag pipeline="question-answering"/> - [`AlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
364_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [`TFAlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxAlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
364_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Question answering task guide](../tasks/question_answering) on how to use the model. **Multiple choice** - [`AlbertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
364_7_12
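A minimal extractive question answering sketch; the QA head below is randomly initialized, so in practice you would load a SQuAD fine-tuned ALBERT checkpoint from the Hub instead.

```python
import torch
from transformers import AlbertForQuestionAnswering, AlbertTokenizerFast

tokenizer = AlbertTokenizerFast.from_pretrained("albert/albert-base-v2")
model = AlbertForQuestionAnswering.from_pretrained("albert/albert-base-v2")

question, context = "Where is the Eiffel Tower?", "The Eiffel Tower is located in Paris."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end positions and decode the span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```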
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#resources
.md
- [`TFAlbertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - Check the [Multiple choice task guide](../tasks/multiple_choice) on how to use the model.
364_7_13
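Multiple choice models expect inputs of shape `(batch_size, num_choices, sequence_length)`; a minimal sketch of that input layout (randomly initialized head, to be fine-tuned e.g. on SWAG):

```python
from transformers import AlbertForMultipleChoice, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-base-v2")
model = AlbertForMultipleChoice.from_pretrained("albert/albert-base-v2")

prompt = "She opened the door and"
choices = ["walked inside.", "the moon is made of cheese."]

# Encode each (prompt, choice) pair, then add the batch dimension
encoding = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
logits = model(**inputs).logits  # shape: (1, num_choices)
```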
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
This is the configuration class to store the configuration of an [`AlbertModel`] or a [`TFAlbertModel`]. It is used to instantiate an ALBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ALBERT [albert/albert-xxlarge-v2](https://huggingface.co/albert/albert-xxlarge-v2) architecture.
364_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
[albert/albert-xxlarge-v2](https://huggingface.co/albert/albert-xxlarge-v2) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30000): Vocabulary size of the ALBERT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`AlbertModel`] or [`TFAlbertModel`].
364_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
`inputs_ids` passed when calling [`AlbertModel`] or [`TFAlbertModel`]. embedding_size (`int`, *optional*, defaults to 128): Dimensionality of vocabulary embeddings. hidden_size (`int`, *optional*, defaults to 4096): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_hidden_groups (`int`, *optional*, defaults to 1):
364_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
Number of hidden layers in the Transformer encoder. num_hidden_groups (`int`, *optional*, defaults to 1): Number of groups for the hidden layers, parameters in the same group are shared. num_attention_heads (`int`, *optional*, defaults to 64): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 16384): The dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
364_8_3
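Because parameters are shared across the layers within a group, the layer count barely affects model size. A small illustration (the config values below are illustrative, not the defaults):

```python
from transformers import AlbertConfig, AlbertModel

shallow = AlbertModel(AlbertConfig(num_hidden_layers=6, hidden_size=768,
                                   num_attention_heads=12, intermediate_size=3072))
deep = AlbertModel(AlbertConfig(num_hidden_layers=24, hidden_size=768,
                                num_attention_heads=12, intermediate_size=3072))

# With num_hidden_groups=1 (the default) all layers share one set of weights,
# so both models have the same parameter count.
print(sum(p.numel() for p in shallow.parameters()))
print(sum(p.numel() for p in deep.parameters()))
```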
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
The dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. inner_group_num (`int`, *optional*, defaults to 1): The number of inner repetitions of attention and FFN (feed-forward) blocks. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu_new"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0):
364_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
`"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large
364_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
The maximum sequence length that this model might ever be used with. Typically set this to something large (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`AlbertModel`] or [`TFAlbertModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12):
364_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. classifier_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for attached classifiers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
364_8_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). pad_token_id (`int`, *optional*, defaults to 0): Padding token id.
364_8_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 2): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 3): End of stream token id. Examples: ```python >>> from transformers import AlbertConfig, AlbertModel
364_8_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
>>> # Initializing an ALBERT-xxlarge style configuration
>>> albert_xxlarge_configuration = AlbertConfig()

>>> # Initializing an ALBERT-base style configuration
>>> albert_base_configuration = AlbertConfig(
...     hidden_size=768,
...     num_attention_heads=12,
...     intermediate_size=3072,
... )

>>> # Initializing a model (with random weights) from the ALBERT-xxlarge style configuration
>>> model = AlbertModel(albert_xxlarge_configuration)
364_8_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertconfig
.md
>>> # Accessing the model configuration >>> configuration = model.config ```
364_8_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
Construct an ALBERT tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer.
364_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. remove_space (`bool`, *optional*, defaults to `True`): Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (`bool`, *optional*, defaults to `False`): Whether or not to keep accents when tokenizing. bos_token (`str`, *optional*, defaults to `"[CLS]"`):
364_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
Whether or not to keep accents when tokenizing. bos_token (`str`, *optional*, defaults to `"[CLS]"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. <Tip>
364_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
</Tip> eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`):
364_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths.
364_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`):
364_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set:
364_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
364_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). Methods: build_inputs_with_special_tokens - get_special_tokens_mask
364_9_8
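A sketch of enabling SentencePiece subword regularization through `sp_model_kwargs` (the checkpoint name is just the public base checkpoint):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained(
    "albert/albert-base-v2",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)

# With sampling enabled, the segmentation may differ from run to run
print(tokenizer.tokenize("unbelievable"))
```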
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizer
.md
Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
364_9_9
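A short example of what the listed methods produce for the ALBERT special-token layout (`[CLS] A [SEP]` for one sequence, `[CLS] A [SEP] B [SEP]` for a pair):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-base-v2")

ids_a = tokenizer.encode("How are you?", add_special_tokens=False)
ids_b = tokenizer.encode("Fine, thanks.", add_special_tokens=False)

pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.convert_ids_to_tokens(pair))                          # [CLS] ... [SEP] ... [SEP]
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))   # 0s for segment A, 1s for segment B
```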
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizerfast
.md
Construct a "fast" ALBERT tokenizer (backed by HuggingFace's *tokenizers* library). Based on [Unigram](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models). This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
364_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizerfast
.md
Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. remove_space (`bool`, *optional*, defaults to `True`): Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
364_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizerfast
.md
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (`bool`, *optional*, defaults to `False`): Whether or not to keep accents when tokenizing. bos_token (`str`, *optional*, defaults to `"[CLS]"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of
364_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizerfast
.md
<Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`):
364_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizerfast
.md
that is used for the end of sequence. The token used is the `sep_token`. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
364_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizerfast
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`):
364_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#alberttokenizerfast
.md
cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
364_10_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albert-specific-outputs
.md
models.albert.modeling_albert.AlbertForPreTrainingOutput Output type of [`AlbertForPreTraining`]. Args: loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): Total loss as the sum of the masked language modeling loss and the sentence order prediction (classification) loss. prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
364_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albert-specific-outputs
.md
(classification) loss. prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). sop_logits (`torch.FloatTensor` of shape `(batch_size, 2)`): Prediction scores of the sentence order prediction (classification) head (scores of True/False continuation before SoftMax).
364_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albert-specific-outputs
.md
Prediction scores of the sentence order prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
364_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.
364_11_3
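A minimal sketch of inspecting the two output heads described above (shapes follow the docstring; any head weights missing from the checkpoint are randomly initialized):

```python
import torch
from transformers import AlbertForPreTraining, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert/albert-base-v2")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)  # (batch_size, sequence_length, vocab_size)
print(outputs.sop_logits.shape)         # (batch_size, 2)
```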
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albert-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. [[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
364_11_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albert-specific-outputs
.md
<frameworkcontent> <pt>
364_11_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertmodel
.md
The bare ALBERT Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
364_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
364_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
364_12_2
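A minimal forward pass through the bare model (no head on top), showing the outputs you get back:

```python
import torch
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-base-v2")
model = AlbertModel.from_pretrained("albert/albert-base-v2")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(outputs.pooler_output.shape)      # (batch_size, hidden_size)
```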
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforpretraining
.md
Albert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `sentence order prediction (classification)` head. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
364_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforpretraining
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
364_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforpretraining
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
364_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertformaskedlm
.md
Albert Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
364_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertformaskedlm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
364_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertformaskedlm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
364_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforsequenceclassification
.md
Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
364_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
364_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
364_15_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertformultiplechoice
.md
Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
364_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
364_16_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertformultiplechoice
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
364_16_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertfortokenclassification
.md
Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
364_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
364_17_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertfortokenclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
364_17_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforquestionanswering
.md
Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
364_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model.
364_18_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albertforquestionanswering
.md
and behavior. Args: config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
364_18_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#tfalbertmodel
.md
No docstring available for TFAlbertModel Methods: call
364_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#tfalbertforpretraining
.md
No docstring available for TFAlbertForPreTraining Methods: call
364_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#tfalbertformaskedlm
.md
No docstring available for TFAlbertForMaskedLM Methods: call
364_21_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#tfalbertforsequenceclassification
.md
No docstring available for TFAlbertForSequenceClassification Methods: call
364_22_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#tfalbertformultiplechoice
.md
No docstring available for TFAlbertForMultipleChoice Methods: call
364_23_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#tfalbertfortokenclassification
.md
No docstring available for TFAlbertForTokenClassification Methods: call
364_24_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#tfalbertforquestionanswering
.md
No docstring available for TFAlbertForQuestionAnswering Methods: call </tf> <jax>
364_25_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#flaxalbertmodel
.md
No docstring available for FlaxAlbertModel Methods: __call__
364_26_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#flaxalbertforpretraining
.md
No docstring available for FlaxAlbertForPreTraining Methods: __call__
364_27_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#flaxalbertformaskedlm
.md
No docstring available for FlaxAlbertForMaskedLM Methods: __call__
364_28_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#flaxalbertforsequenceclassification
.md
No docstring available for FlaxAlbertForSequenceClassification Methods: __call__
364_29_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#flaxalbertformultiplechoice
.md
No docstring available for FlaxAlbertForMultipleChoice Methods: __call__
364_30_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#flaxalbertfortokenclassification
.md
No docstring available for FlaxAlbertForTokenClassification Methods: __call__
364_31_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#flaxalbertforquestionanswering
.md
No docstring available for FlaxAlbertForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
364_32_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
365_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
365_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#overview
.md
The DINOv2 with Registers model was proposed in [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski. The [Vision Transformer](vit) (ViT) is a transformer encoder model (BERT-like) originally introduced to do supervised image classification on ImageNet.
365_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#overview
.md
Next, people figured out ways to make ViT work really well on self-supervised image feature extraction (i.e. learning meaningful features, also called embeddings) on images without requiring any labels. Some example papers here include [DINOv2](dinov2) and [MAE](vit_mae).
365_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#overview
.md
The authors of DINOv2 noticed that ViTs have artifacts in attention maps. It's due to the model using some image patches as "registers". The authors propose a fix: just add some new tokens (called "register" tokens), which you only use during pre-training (and throw away afterwards). This results in:
- no artifacts
- interpretable attention maps
- improved performance.

The abstract from the paper is the following:
365_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#overview
.md
*Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens appearing during inference primarily in low-informative background areas of images, that are repurposed for internal computations. We propose a simple yet effective solution based on providing additional tokens to the input sequence of the Vision
365_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#overview
.md
We propose a simple yet effective solution based on providing additional tokens to the input sequence of the Vision Transformer to fill that role. We show that this solution fixes that problem entirely for both supervised and self-supervised models, sets a new state of the art for self-supervised visual models on dense visual prediction tasks, enables object discovery methods with larger models, and most importantly leads to smoother feature maps and attention maps for downstream visual processing.*
365_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png" alt="drawing" width="600"/> <small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small> Tips: - Usage of DINOv2 with Registers is identical to DINOv2 without registers; you'll just get better performance.
365_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#overview
.md
Tips: - Usage of DINOv2 with Registers is identical to DINOv2 without registers; you'll just get better performance. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/dinov2).
365_1_6
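Since usage matches DINOv2, here is a minimal feature-extraction sketch; the checkpoint name follows the config section below, so adjust it to the model size you actually need.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint name taken from the config docs below; swap in another size if needed
processor = AutoImageProcessor.from_pretrained("facebook/dinov2-with-registers-base")
model = AutoModel.from_pretrained("facebook/dinov2-with-registers-base")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```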
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
This is the configuration class to store the configuration of a [`Dinov2WithRegistersModel`]. It is used to instantiate a Dinov2WithRegisters model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DINOv2 with Registers [facebook/dinov2-with-registers-base](https://huggingface.co/facebook/dinov2-with-registers-base) architecture.
365_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
[facebook/dinov2-with-registers-base](https://huggingface.co/facebook/dinov2-with-registers-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder.
365_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. mlp_ratio (`int`, *optional*, defaults to 4): Ratio of the hidden size of the MLPs relative to the `hidden_size`. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
365_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
365_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image.
365_2_4
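A hedged sketch of building a randomly initialized model from a custom configuration; the values are illustrative rather than the defaults, and `Dinov2WithRegistersModel` is assumed to be available in your installed transformers version.

```python
from transformers import Dinov2WithRegistersConfig, Dinov2WithRegistersModel

# Illustrative, smaller-than-default configuration
config = Dinov2WithRegistersConfig(
    hidden_size=384,
    num_hidden_layers=6,
    num_attention_heads=6,
    image_size=224,
)
model = Dinov2WithRegistersModel(config)

# Accessing the model configuration
print(model.config.hidden_size)
```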