source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#overview
.md
outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the
184_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#overview
.md
also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We
184_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#overview
.md
per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.* This model was contributed by [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
184_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#usage-tips
.md
- XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids. - It applies RoBERTa's training techniques to the XLM approach, but does not use the translation language modeling objective; it is trained only with masked language modeling on sentences coming from one language.
184_3_0
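To make the first tip concrete, here is a minimal sketch (not part of the original page) that queries the `FacebookAI/xlm-roberta-base` checkpoint through the standard `fill-mask` pipeline in two different languages, with no `lang` tensor involved; the example sentences are arbitrary.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="FacebookAI/xlm-roberta-base")

# The same pipeline handles inputs in different languages; only the input ids matter.
print(unmasker("Bonjour, je suis un modèle <mask>.")[0]["token_str"])
print(unmasker("Hallo, ich bin ein <mask> Modell.")[0]["token_str"])
```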
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/>
184_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
<PipelineTag pipeline="text-classification"/> - A blog post on how to [finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS](https://www.philschmid.de/habana-distributed-training) - [`XLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
184_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [`TFXLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
184_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [`FlaxXLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification](https://huggingface.co/docs/transformers/tasks/sequence_classification) chapter of the 🤗 Hugging Face Task Guides.
184_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification"/> - [`XLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
184_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [`TFXLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxXLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
184_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="text-generation"/> - [`XLMRobertaForCausalLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
184_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) chapter of the 🤗 Hugging Face Task Guides. - [Causal language modeling task guide](../tasks/language_modeling) <PipelineTag pipeline="fill-mask"/>
184_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [Causal language modeling task guide](../tasks/language_modeling) <PipelineTag pipeline="fill-mask"/> - [`XLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
184_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [`TFXLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
184_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [`FlaxXLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling](../tasks/masked_language_modeling)
184_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [Masked language modeling](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - [`XLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
184_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [`TFXLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxXLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
184_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`XLMRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
184_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
- [`TFXLMRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) 🚀 Deploy - A blog post on how to [Deploy Serverless XLM RoBERTa on AWS Lambda](https://www.philschmid.de/multilingual-serverless-xlm-roberta-with-huggingface).
184_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#resources
.md
<Tip> This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](roberta) for usage examples as well as information on the inputs and outputs. </Tip>
184_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
This is the configuration class to store the configuration of an [`XLMRobertaModel`] or a [`TFXLMRobertaModel`]. It is used to instantiate an XLM-RoBERTa model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XLM-RoBERTa [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) architecture.
184_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
[FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the XLM-RoBERTa model. Defines the number of different tokens that can be represented by
184_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
Vocabulary size of the XLM-RoBERTa model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`XLMRobertaModel`] or [`TFXLMRobertaModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
184_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
184_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities.
184_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`XLMRobertaModel`] or [`TFXLMRobertaModel`].
184_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
The vocabulary size of the `token_type_ids` passed when calling [`XLMRobertaModel`] or [`TFXLMRobertaModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
184_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
184_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`):
184_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. Examples:
```python
>>> from transformers import XLMRobertaConfig, XLMRobertaModel
```
184_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaconfig
.md
```python
>>> # Initializing a XLM-RoBERTa FacebookAI/xlm-roberta-base style configuration
>>> configuration = XLMRobertaConfig()

>>> # Initializing a model (with random weights) from the FacebookAI/xlm-roberta-base style configuration
>>> model = XLMRobertaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
184_5_10
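As a complement to the example above, here is a hedged sketch (not from the original docstring) that overrides a few of the documented defaults to build a smaller, randomly initialized model; the specific values are arbitrary and only meant to show how the arguments map onto the architecture.

```python
from transformers import XLMRobertaConfig, XLMRobertaModel

# Values below are purely illustrative; any consistent combination works as long
# as hidden_size is divisible by num_attention_heads.
config = XLMRobertaConfig(
    hidden_size=384,         # default is 768
    num_hidden_layers=6,     # default is 12
    num_attention_heads=6,   # default is 12
    intermediate_size=1536,  # default is 3072
)
model = XLMRobertaModel(config)
print(model.config.num_hidden_layers)  # 6
```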
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. bos_token (`str`, *optional*, defaults to `"<s>"`):
184_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
Args: vocab_file (`str`): Path to the vocabulary file. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip>
184_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
</Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
184_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
184_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`):
184_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
184_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results.
184_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
- `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (the lattice) using the forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`):
184_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizer
.md
BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
184_6_8
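The subword-regularization options described above can be exercised directly through `sp_model_kwargs`; a hedged sketch follows, with illustrative sampling values (requires the `sentencepiece` package).

```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)

# With sampling enabled, repeated calls may segment the same text differently.
print(tokenizer.tokenize("multilingual representation learning"))
print(tokenizer.tokenize("multilingual representation learning"))
```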
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizerfast
.md
Construct a "fast" XLM-RoBERTa tokenizer (backed by HuggingFace's *tokenizers* library). Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on [BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models). This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file.
184_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizerfast
.md
refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`):
184_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizerfast
.md
sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
184_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizerfast
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
184_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizerfast
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`):
184_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizerfast
.md
token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`):
184_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertatokenizerfast
.md
additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`): Additional special tokens used by the tokenizer. <frameworkcontent> <pt>
184_7_6
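To illustrate how the `cls_token` and `sep_token` described above are actually laid out, here is a small sketch (not from the original page); the example sentences are arbitrary.

```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("FacebookAI/xlm-roberta-base")

single = tokenizer("Hello world")
pair = tokenizer("Hello world", "How are you?")

print(tokenizer.convert_ids_to_tokens(single["input_ids"]))
# cls_token ('<s>') first, then the sequence, closed by sep_token ('</s>')
print(tokenizer.convert_ids_to_tokens(pair["input_ids"]))
# '<s>' A '</s>' '</s>' B '</s>': two sep tokens separate the two sequences
```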
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertamodel
.md
The bare XLM-RoBERTa Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
184_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertamodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
184_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertamodel
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is
184_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertamodel
.md
cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
184_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertamodel
.md
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
184_8_4
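The decoder setup described above can be expressed with a few lines of configuration; the following is a hedged sketch rather than part of the original page, and the cross-attention weights it adds are randomly initialized because the pretrained checkpoint does not contain them.

```python
from transformers import XLMRobertaConfig, XLMRobertaModel

config = XLMRobertaConfig.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    is_decoder=True,
    add_cross_attention=True,
)
decoder = XLMRobertaModel.from_pretrained("FacebookAI/xlm-roberta-base", config=config)
print(decoder.config.is_decoder, decoder.config.add_cross_attention)  # True True
```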
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforcausallm
.md
XLM-RoBERTa Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
184_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
184_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforcausallm
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
184_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaformaskedlm
.md
XLM-RoBERTa Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
184_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaformaskedlm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
184_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaformaskedlm
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
184_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforsequenceclassification
.md
XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
184_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
184_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforsequenceclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
184_11_2
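The task-specific head classes all follow the same loading pattern; as a representative, hedged example (not from the original page), the sequence-classification head can be attached with a hypothetical three-label setup. The head itself is newly initialized and would still need fine-tuning before its predictions mean anything.

```python
import torch
from transformers import XLMRobertaTokenizerFast, XLMRobertaForSequenceClassification

tokenizer = XLMRobertaTokenizerFast.from_pretrained("FacebookAI/xlm-roberta-base")
model = XLMRobertaForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-base", num_labels=3  # num_labels is a hypothetical choice
)

inputs = tokenizer("Une phrase à classer.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```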
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaformultiplechoice
.md
XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
184_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
184_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaformultiplechoice
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
184_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertafortokenclassification
.md
XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
184_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertafortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
184_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertafortokenclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
184_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforquestionanswering
.md
XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
184_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the
184_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlmrobertaforquestionanswering
.md
and behavior. Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
184_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#tfxlmrobertamodel
.md
No docstring available for TFXLMRobertaModel Methods: call
184_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#tfxlmrobertaforcausallm
.md
No docstring available for TFXLMRobertaForCausalLM Methods: call
184_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#tfxlmrobertaformaskedlm
.md
No docstring available for TFXLMRobertaForMaskedLM Methods: call
184_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#tfxlmrobertaforsequenceclassification
.md
No docstring available for TFXLMRobertaForSequenceClassification Methods: call
184_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#tfxlmrobertaformultiplechoice
.md
No docstring available for TFXLMRobertaForMultipleChoice Methods: call
184_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#tfxlmrobertafortokenclassification
.md
No docstring available for TFXLMRobertaForTokenClassification Methods: call
184_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#tfxlmrobertaforquestionanswering
.md
No docstring available for TFXLMRobertaForQuestionAnswering Methods: call </tf> <jax>
184_21_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#flaxxlmrobertamodel
.md
No docstring available for FlaxXLMRobertaModel Methods: __call__
184_22_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#flaxxlmrobertaforcausallm
.md
No docstring available for FlaxXLMRobertaForCausalLM Methods: __call__
184_23_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#flaxxlmrobertaformaskedlm
.md
No docstring available for FlaxXLMRobertaForMaskedLM Methods: __call__
184_24_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#flaxxlmrobertaforsequenceclassification
.md
No docstring available for FlaxXLMRobertaForSequenceClassification Methods: __call__
184_25_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#flaxxlmrobertaformultiplechoice
.md
No docstring available for FlaxXLMRobertaForMultipleChoice Methods: __call__
184_26_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#flaxxlmrobertafortokenclassification
.md
No docstring available for FlaxXLMRobertaForTokenClassification Methods: __call__
184_27_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#flaxxlmrobertaforquestionanswering
.md
No docstring available for FlaxXLMRobertaForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
184_28_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
185_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
185_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#overview
.md
DiT was proposed in [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. DiT applies the self-supervised objective of [BEiT](beit) (BERT pre-training of Image Transformers) to 42 million document images, allowing for state-of-the-art results on tasks including: - document image classification: the [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset (a collection of
185_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#overview
.md
- document image classification: the [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset (a collection of 400,000 images belonging to one of 16 classes). - document layout analysis: the [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) dataset (a collection of more than 360,000 document images constructed by automatically parsing PubMed XML files). - table detection: the [ICDAR 2019 cTDaR](https://github.com/cndplab-founder/ICDAR2019_cTDaR) dataset (a collection of
185_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#overview
.md
- table detection: the [ICDAR 2019 cTDaR](https://github.com/cndplab-founder/ICDAR2019_cTDaR) dataset (a collection of 600 training images and 240 testing images). The abstract from the paper is the following:
185_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#overview
.md
*Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose DiT, a self-supervised pre-trained Document Image Transformer model using large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in
185_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#overview
.md
supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, as well as table detection. Experiment results have illustrated that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and
185_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#overview
.md
on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55).*
185_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dit_architecture.jpg" alt="drawing" width="600"/> <small> Summary of the approach. Taken from the [original paper](https://arxiv.org/abs/2203.02378). </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/dit).
185_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#usage-tips
.md
One can directly use the weights of DiT with the AutoModel API:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/dit-base")
```

This will load the model pre-trained on masked image modeling. Note that this won't include the language modeling head on top, used to predict visual tokens. To include the head, you can load the weights into a `BeitForMaskedImageModeling` model, like so:

```python
from transformers import BeitForMaskedImageModeling
```
185_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#usage-tips
.md
```python
model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base")
```

You can also load a fine-tuned model from the [hub](https://huggingface.co/models?other=dit), like so:

```python
from transformers import AutoModelForImageClassification
```
185_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#usage-tips
.md
```python
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
```

This particular checkpoint was fine-tuned on [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/), an important benchmark for document image classification. A notebook that illustrates inference for document image classification can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DiT/Inference_with_DiT_(Document_Image_Transformer)_for_document_image_classification.ipynb).
185_2_2
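As a hedged sketch of the inference flow (this is not the linked notebook), the fine-tuned RVL-CDIP checkpoint mentioned above can be used to classify a local document image; `document.png` is a placeholder path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")

# "document.png" stands in for any local document image.
image = Image.open("document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])  # one of the 16 RVL-CDIP classes
```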
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiT. <PipelineTag pipeline="image-classification"/> - [`BeitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
185_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dit.md
https://huggingface.co/docs/transformers/en/model_doc/dit/#resources
.md
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <Tip> As DiT's architecture is equivalent to that of BEiT, one can refer to [BEiT's documentation page](beit) for all tips, code examples and notebooks. </Tip>
185_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
186_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
186_0_1