| source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizer
|
.md
|
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
|
299_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizer
|
.md
|
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
|
299_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizer
|
.md
|
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
|
299_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizer
|
.md
|
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
299_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
299_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizer
|
.md
|
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of a word by the preceding space.)
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
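As a quick illustration of how these methods fit together, here is a minimal sketch (assuming the `FacebookAI/roberta-base` checkpoint can be downloaded) that builds single- and pair-sequence inputs with `build_inputs_with_special_tokens`:
```python
>>> from transformers import RobertaTokenizer

>>> tokenizer = RobertaTokenizer.from_pretrained("FacebookAI/roberta-base")

>>> # Encode without special tokens, then add <s> ... </s> explicitly
>>> ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
>>> ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

>>> # Single sequence: <s> A </s>
>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)
>>> # Pair of sequences: <s> A </s></s> B </s>
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

>>> tokenizer.convert_ids_to_tokens(single)[0], tokenizer.convert_ids_to_tokens(single)[-1]
('<s>', '</s>')
```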
|
299_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
Construct a "fast" RoBERTa tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2
tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
```python
>>> from transformers import RobertaTokenizerFast
|
299_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
>>> tokenizer = RobertaTokenizerFast.from_pretrained("FacebookAI/roberta-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
|
299_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
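For reference, a minimal sketch of the two options described above: passing `add_prefix_space=True` when instantiating the tokenizer, and using pre-tokenized input with `is_split_into_words=True` (which requires that flag):
```python
>>> from transformers import RobertaTokenizerFast

>>> # Option 1: set the flag once when loading the tokenizer
>>> tokenizer = RobertaTokenizerFast.from_pretrained("FacebookAI/roberta-base", add_prefix_space=True)

>>> # "Hello" is now encoded as if preceded by a space, even at position 0
>>> encoding = tokenizer("Hello world")

>>> # Option 2: pre-tokenized input, as mentioned in the tip above
>>> encoding_words = tokenizer(["Hello", "world"], is_split_into_words=True)
```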
|
299_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
|
299_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
|
299_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
|
299_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
|
299_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
299_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
299_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of a word by the preceding space.)
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether the post processing step should trim offsets to avoid including whitespaces.
Methods: build_inputs_with_special_tokens
|
299_7_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertatokenizerfast
|
.md
|
Methods: build_inputs_with_special_tokens
<frameworkcontent>
<pt>
|
299_7_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertamodel
|
.md
|
The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
299_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
299_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertamodel
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
|
299_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertamodel
|
.md
|
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set
|
299_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertamodel
|
.md
|
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
`add_cross_attention` set to `True`; `encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward
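A minimal sketch of the decoder setup described above, using the `FacebookAI/roberta-base` checkpoint. The cross-attention weights are newly initialized, so this only shows the wiring, not a trained decoder:
```python
>>> import torch
>>> from transformers import AutoTokenizer, RobertaConfig, RobertaModel

>>> config = RobertaConfig.from_pretrained("FacebookAI/roberta-base")
>>> config.is_decoder = True           # self-attention becomes causal
>>> config.add_cross_attention = True  # adds cross-attention layers for Seq2Seq use

>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> model = RobertaModel.from_pretrained("FacebookAI/roberta-base", config=config)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # encoder_hidden_states would normally come from a separate encoder
>>> encoder_hidden_states = torch.randn(1, 5, config.hidden_size)
>>> outputs = model(**inputs, encoder_hidden_states=encoder_hidden_states)
>>> outputs.last_hidden_state.shape
torch.Size([1, 8, 768])
```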
|
299_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforcausallm
|
.md
|
RoBERTa Model with a `language modeling` head on top for CLM fine-tuning.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
299_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
299_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforcausallm
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
299_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaformaskedlm
|
.md
|
RoBERTa Model with a `language modeling` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
299_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaformaskedlm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
299_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaformaskedlm
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
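A minimal sketch of masked language modeling with the pretrained head (the usual fill-mask pattern; the predicted token is whatever the checkpoint considers most likely):
```python
>>> import torch
>>> from transformers import AutoTokenizer, RobertaForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> model = RobertaForMaskedLM.from_pretrained("FacebookAI/roberta-base")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Locate the <mask> token and take its most likely replacement
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(axis=-1)
>>> tokenizer.decode(predicted_id)
' Paris'
```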
|
299_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforsequenceclassification
|
.md
|
RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
299_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
299_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforsequenceclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
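A minimal sketch of the sequence classification head. Note that loading the base checkpoint adds a freshly initialized classification head, so the predicted label is meaningless until the model is fine-tuned:
```python
>>> import torch
>>> from transformers import AutoTokenizer, RobertaForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> # num_labels controls the size of the classification head
>>> model = RobertaForSequenceClassification.from_pretrained("FacebookAI/roberta-base", num_labels=2)

>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax(-1).item()

>>> # Supplying labels returns a classification loss that can be backpropagated
>>> loss = model(**inputs, labels=torch.tensor([1])).loss
```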
|
299_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaformultiplechoice
|
.md
|
RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
299_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
299_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaformultiplechoice
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
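A minimal sketch of the multiple choice input layout: each (prompt, choice) pair is encoded and the batch is reshaped to `(batch_size, num_choices, sequence_length)`. The head is randomly initialized on top of the base checkpoint:
```python
>>> import torch
>>> from transformers import AutoTokenizer, RobertaForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> model = RobertaForMultipleChoice.from_pretrained("FacebookAI/roberta-base")

>>> prompt = "In Italy, pizza served in formal settings is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> # Encode the prompt against every choice, then add a batch dimension of 1
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=torch.tensor(0).unsqueeze(0))

>>> # One logit per choice
>>> outputs.logits.shape
torch.Size([1, 2])
```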
|
299_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertafortokenclassification
|
.md
|
RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
299_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertafortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
299_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertafortokenclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
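A minimal sketch of the token classification head, which produces one logit vector per input token (again, the head is randomly initialized when loading the base checkpoint):
```python
>>> import torch
>>> from transformers import AutoTokenizer, RobertaForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> model = RobertaForTokenClassification.from_pretrained("FacebookAI/roberta-base", num_labels=5)

>>> inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)

>>> # One predicted label id per token (including the special tokens)
>>> predicted_ids = logits.argmax(-1)
```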
|
299_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforquestionanswering
|
.md
|
RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
299_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
|
299_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#robertaforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`RobertaConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
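A minimal sketch of the extractive question answering pattern. The span head is randomly initialized on the base checkpoint, so a SQuAD fine-tuned RoBERTa checkpoint would be needed for sensible answers:
```python
>>> import torch
>>> from transformers import AutoTokenizer, RobertaForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> model = RobertaForQuestionAnswering.from_pretrained("FacebookAI/roberta-base")

>>> question = "Who wrote the paper?"
>>> context = "The RoBERTa paper was written by researchers at Facebook AI."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Pick the most likely start and end positions and decode the span
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])
```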
</pt>
<tf>
|
299_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#tfrobertamodel
|
.md
|
No docstring available for TFRobertaModel
Methods: call
|
299_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#tfrobertaforcausallm
|
.md
|
No docstring available for TFRobertaForCausalLM
Methods: call
|
299_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#tfrobertaformaskedlm
|
.md
|
No docstring available for TFRobertaForMaskedLM
Methods: call
|
299_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#tfrobertaforsequenceclassification
|
.md
|
No docstring available for TFRobertaForSequenceClassification
Methods: call
|
299_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#tfrobertaformultiplechoice
|
.md
|
No docstring available for TFRobertaForMultipleChoice
Methods: call
|
299_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#tfrobertafortokenclassification
|
.md
|
No docstring available for TFRobertaForTokenClassification
Methods: call
|
299_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#tfrobertaforquestionanswering
|
.md
|
No docstring available for TFRobertaForQuestionAnswering
Methods: call
</tf>
<jax>
|
299_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#flaxrobertamodel
|
.md
|
No docstring available for FlaxRobertaModel
Methods: __call__
|
299_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#flaxrobertaforcausallm
|
.md
|
No docstring available for FlaxRobertaForCausalLM
Methods: __call__
|
299_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#flaxrobertaformaskedlm
|
.md
|
No docstring available for FlaxRobertaForMaskedLM
Methods: __call__
|
299_24_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#flaxrobertaforsequenceclassification
|
.md
|
No docstring available for FlaxRobertaForSequenceClassification
Methods: __call__
|
299_25_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#flaxrobertaformultiplechoice
|
.md
|
No docstring available for FlaxRobertaForMultipleChoice
Methods: __call__
|
299_26_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#flaxrobertafortokenclassification
|
.md
|
No docstring available for FlaxRobertaForTokenClassification
Methods: __call__
|
299_27_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/roberta/#flaxrobertaforquestionanswering
|
.md
|
No docstring available for FlaxRobertaForQuestionAnswering
Methods: __call__
</jax>
</frameworkcontent>
|
299_28_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
300_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
300_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#overview
|
.md
|
The Swin Transformer V2 model was proposed in [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
The abstract from the paper is the following:
|
300_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#overview
|
.md
|
*Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a
|
300_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#overview
|
.md
|
resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper
|
300_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#overview
|
.md
|
A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video
|
300_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#overview
|
.md
|
tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.*
|
300_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#overview
|
.md
|
This model was contributed by [nandwalritik](https://huggingface.co/nandwalritik).
The original code can be found [here](https://github.com/microsoft/Swin-Transformer).
|
300_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer v2.
<PipelineTag pipeline="image-classification"/>
- [`Swinv2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
300_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#resources
|
.md
|
- See also: [Image classification task guide](../tasks/image_classification)
Besides that:
- [`Swinv2ForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
300_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
This is the configuration class to store the configuration of a [`Swinv2Model`]. It is used to instantiate a Swin
Transformer v2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Swin Transformer v2
[microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256)
architecture.
|
300_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
[microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 4):
The size (resolution) of each patch.
|
300_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 4):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
embed_dim (`int`, *optional*, defaults to 96):
Dimensionality of patch embedding.
depths (`list(int)`, *optional*, defaults to `[2, 2, 6, 2]`):
Depth of each layer in the Transformer encoder.
num_heads (`list(int)`, *optional*, defaults to `[3, 6, 12, 24]`):
|
300_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
Depth of each layer in the Transformer encoder.
num_heads (`list(int)`, *optional*, defaults to `[3, 6, 12, 24]`):
Number of attention heads in each layer of the Transformer encoder.
window_size (`int`, *optional*, defaults to 7):
Size of windows.
pretrained_window_sizes (`list(int)`, *optional*, defaults to `[0, 0, 0, 0]`):
Size of windows during pretraining.
mlp_ratio (`float`, *optional*, defaults to 4.0):
Ratio of MLP hidden dimensionality to embedding dimensionality.
|
300_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
mlp_ratio (`float`, *optional*, defaults to 4.0):
Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
300_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
drop_path_rate (`float`, *optional*, defaults to 0.1):
Stochastic depth rate.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`,
`"selu"` and `"gelu_new"` are supported.
use_absolute_embeddings (`bool`, *optional*, defaults to `False`):
|
300_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
`"selu"` and `"gelu_new"` are supported.
use_absolute_embeddings (`bool`, *optional*, defaults to `False`):
Whether or not to add absolute position embeddings to the patch embeddings.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
encoder_stride (`int`, *optional*, defaults to 32):
|
300_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
The epsilon used by the layer normalization layers.
encoder_stride (`int`, *optional*, defaults to 32):
Factor to increase the spatial resolution by in the decoder head for masked image modeling.
out_features (`List[str]`, *optional*):
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
|
300_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
corresponding stages. If unset and `out_indices` is unset, will default to the last stage.
out_indices (`List[int]`, *optional*):
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
|
300_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
If unset and `out_features` is unset, will default to the last stage.
Example:
```python
>>> from transformers import Swinv2Config, Swinv2Model
|
300_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2config
|
.md
|
>>> # Initializing a Swinv2 microsoft/swinv2-tiny-patch4-window8-256 style configuration
>>> configuration = Swinv2Config()
>>> # Initializing a model (with random weights) from the microsoft/swinv2-tiny-patch4-window8-256 style configuration
>>> model = Swinv2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
300_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2model
|
.md
|
The bare Swinv2 Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`Swinv2Config`]): Model configuration class with all the parameters of the model.
|
300_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2model
|
.md
|
behavior.
Parameters:
config ([`Swinv2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
300_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2formaskedimagemodeling
|
.md
|
Swinv2 Model with a decoder on top for masked image modeling, as proposed in
[SimMIM](https://arxiv.org/abs/2111.09886).
<Tip>
Note that we provide a script to pre-train this model on custom data in our [examples
directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining).
</Tip>
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
|
300_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2formaskedimagemodeling
|
.md
|
</Tip>
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`Swinv2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
300_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2formaskedimagemodeling
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
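A minimal sketch of the masked image modeling setup (the image URL and the random mask are placeholders; the output is assumed to expose `loss` and `reconstruction` as in other SimMIM-style heads in the library):
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Swinv2ForMaskedImageModeling

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
>>> model = Swinv2ForMaskedImageModeling.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

>>> pixel_values = processor(images=image, return_tensors="pt").pixel_values
>>> # One boolean per patch: True means "masked", and the model tries to reconstruct that patch
>>> num_patches = (model.config.image_size // model.config.patch_size) ** 2
>>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

>>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
>>> loss, reconstruction = outputs.loss, outputs.reconstruction
```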
|
300_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2forimageclassification
|
.md
|
Swinv2 Model transformer with an image classification head on top (a linear layer on top of the final hidden state
of the [CLS] token) e.g. for ImageNet.
<Tip>
Note that it's possible to fine-tune SwinV2 on higher resolution images than the ones it has been trained on, by
setting `interpolate_pos_encoding` to `True` in the forward of the model. This will interpolate the pre-trained
position embeddings to the higher resolution.
</Tip>
|
300_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2forimageclassification
|
.md
|
position embeddings to the higher resolution.
</Tip>
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`Swinv2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
300_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swinv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/swinv2/#swinv2forimageclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
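A minimal sketch of image classification with this checkpoint's classification head; the `interpolate_pos_encoding=True` option mentioned in the tip above is shown as a commented-out variant:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Swinv2ForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
>>> model = Swinv2ForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
...     # For higher-resolution inputs, interpolate the pretrained position embeddings:
...     # logits = model(**inputs, interpolate_pos_encoding=True).logits

>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
```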
|
300_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
301_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
301_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#overview
|
.md
|
The LED model was proposed in [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz
Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
*Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
|
301_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#overview
|
.md
|
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
|
301_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#overview
|
.md
|
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
|
301_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#overview
|
.md
|
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting
long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization
dataset.*
|
301_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#usage-tips
|
.md
|
- [`LEDForConditionalGeneration`] is an extension of
[`BartForConditionalGeneration`] exchanging the traditional *self-attention* layer with
*Longformer*'s *chunked self-attention* layer. [`LEDTokenizer`] is an alias of
[`BartTokenizer`].
- LED works very well on long-range *sequence-to-sequence* tasks where the `input_ids` largely exceed a length of
1024 tokens.
- LED pads the `input_ids` to be a multiple of `config.attention_window` if required. Therefore a small speed-up is
|
301_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#usage-tips
|
.md
|
- LED pads the `input_ids` to be a multiple of `config.attention_window` if required. Therefore a small speed-up is
gained when [`LEDTokenizer`] is used with the `pad_to_multiple_of` argument.
- LED makes use of *global attention* by means of the `global_attention_mask` (see
[`LongformerModel`]). For summarization, it is advised to put *global attention* only on the first
`<s>` token. For question answering, it is advised to put *global attention* on all tokens of the question.
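A minimal sketch of the summarization setup described above, putting global attention only on the first `<s>` token. The article text is a placeholder, and `allenai/led-base-16384` is the base checkpoint, so a summarization fine-tuned LED checkpoint would give better summaries:
```python
>>> import torch
>>> from transformers import LEDForConditionalGeneration, LEDTokenizer

>>> tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
>>> model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

>>> article = "Replace this with a long document to summarize. " * 100
>>> inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=16384)

>>> # Global attention on the first token only, as advised for summarization
>>> global_attention_mask = torch.zeros_like(inputs.input_ids)
>>> global_attention_mask[:, 0] = 1

>>> summary_ids = model.generate(
...     inputs.input_ids,
...     global_attention_mask=global_attention_mask,
...     max_length=128,
...     num_beams=4,
... )
>>> print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```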
|
301_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#usage-tips
|
.md
|
`<s>` token. For question answering, it is advised to put *global attention* on all tokens of the question.
- To fine-tune LED on all 16384 tokens, *gradient checkpointing* can be enabled in case training leads to out-of-memory (OOM)
errors. This can be done by executing `model.gradient_checkpointing_enable()`.
Moreover, the `use_cache=False`
flag can be used to disable the caching mechanism to save memory.
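One way to apply the two memory-saving switches mentioned above, sketched under the assumption that the model is loaded as in the previous snippet:
```python
>>> from transformers import LEDForConditionalGeneration

>>> model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

>>> # Trade compute for memory when fine-tuning on very long inputs
>>> model.gradient_checkpointing_enable()

>>> # Disable the decoder cache to save additional memory during training
>>> model.config.use_cache = False
```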
|
301_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#usage-tips
|
.md
|
Moreover, the `use_cache=False`
flag can be used to disable the caching mechanism to save memory.
- LED is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
|
301_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#resources
|
.md
|
- [A notebook showing how to evaluate LED](https://colab.research.google.com/drive/12INTTR6n64TzS4RrXZxMSXfrOd9Xzamo?usp=sharing).
- [A notebook showing how to fine-tune LED](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing).
- [Text classification task guide](../tasks/sequence_classification)
- [Question answering task guide](../tasks/question_answering)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
301_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
|
.md
|
This is the configuration class to store the configuration of a [`LEDModel`]. It is used to instantiate an LED
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LED
[allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
301_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the LED model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`LEDModel`] or [`TFLEDModel`].
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
|
301_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
|
.md
|
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
|
301_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
|
.md
|
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
|
301_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
|
.md
|
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
301_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
|
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
max_encoder_position_embeddings (`int`, *optional*, defaults to 16384):
The maximum sequence length that the encoder might ever be used with.
|
301_4_5
|