## Usage tips

- Our model supports lightweight prompt tuning following [Prefix-tuning](https://arxiv.org/abs/2101.00190) with method `set_lightweight_tuning()`.
## Usage examples

For summarization, the following is an example of using MVP and MVP with summarization-specific prompts.
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
>>> model_with_prompt = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")

>>> inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"] | 145_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/#usage-examples | .md | >>> generated_ids = model_with_prompt.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
```
For data-to-text generation, the following is an example of using MVP and its multi-task pre-trained variants.
```python
>>> from transformers import MvpTokenizerFast, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
>>> model_with_mtl = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")

>>> inputs = tokenizer(
... "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']

>>> generated_ids = model_with_mtl.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
```
For lightweight tuning, *i.e.*, fixing the model and only tuning prompts, you can load MVP with randomly initialized prompts or with task-specific prompts. Our code also supports Prefix-tuning with BART following the [original paper](https://arxiv.org/abs/2101.00190).
```python
>>> from transformers import MvpForConditionalGeneration
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
>>> # the number of trainable parameters (full tuning)
>>> sum(p.numel() for p in model.parameters() if p.requires_grad)
468116832
>>> # lightweight tuning with randomly initialized prompts
>>> model.set_lightweight_tuning()
>>> # the number of trainable parameters (lightweight tuning)
>>> sum(p.numel() for p in model.parameters() if p.requires_grad)
61823328

>>> # lightweight tuning with task-specific prompts
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
>>> model.set_lightweight_tuning()
>>> # original lightweight Prefix-tuning
>>> model = MvpForConditionalGeneration.from_pretrained("facebook/bart-large", use_prompt=True)
>>> model.set_lightweight_tuning()
```

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)

## MvpConfig

This is the configuration class to store the configuration of a [`MvpModel`]. It is used to instantiate an MVP model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the MVP [RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50267):
Vocabulary size of the MVP model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`MvpModel`].
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
max_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (`int`, *optional*, defaults to 2):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`.
use_prompt (`bool`, *optional*, defaults to `False`):
Whether or not to use prompts (prefix-tuning).
prompt_length (`int`, *optional*, defaults to 100):
The length of the prompt.
prompt_mid_dim (`int`, *optional*, defaults to 800):
Dimensionality of the "intermediate" layer in the prompt.
Example:
```python
>>> from transformers import MvpConfig, MvpModel

>>> # Initializing an MVP RUCAIBox/mvp style configuration
>>> configuration = MvpConfig()
>>> # Initializing a model (with random weights) from the RUCAIBox/mvp style configuration
>>> model = MvpModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
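The prompt-related arguments can be set in the same way; a minimal sketch with illustrative values (these match the documented defaults):

```python
>>> from transformers import MvpConfig, MvpModel

>>> # Enable lightweight prompts; prompt_length and prompt_mid_dim are illustrative
>>> configuration = MvpConfig(use_prompt=True, prompt_length=100, prompt_mid_dim=800)
>>> model = MvpModel(configuration)
```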
## MvpTokenizer

Constructs an MVP tokenizer, which is similar to the RoBERTa tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import MvpTokenizer
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]

>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows treating the leading word just as any
other word. (The MVP tokenizer detects the beginning of words by the preceding space.)
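A short sketch of the `add_prefix_space=True` behavior described above; the resulting ids should then match those of `" Hello world"`:

```python
>>> from transformers import MvpTokenizer

>>> # The leading word is now treated like any other word
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]
[0, 20920, 232, 2]
```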
## MvpTokenizerFast

Construct a "fast" MVP tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2 tokenizer,
using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import MvpTokenizerFast

>>> tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]

>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows treating the leading word just as any
other word. (The MVP tokenizer detects the beginning of words by the preceding space.)
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether the post processing step should trim offsets to avoid including whitespaces.

## MvpModel

The bare MVP Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MvpConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
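A minimal sketch of a forward pass through the bare model; it returns raw hidden states rather than task-specific outputs:

```python
>>> import torch
>>> from transformers import MvpTokenizer, MvpModel

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpModel.from_pretrained("RUCAIBox/mvp")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state  # shape (batch_size, sequence_length, d_model)
```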
## MvpForConditionalGeneration

The MVP Model with a language modeling head. Can be used for various text generation tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MvpConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## MvpForSequenceClassification

Mvp model with a sequence classification head on top (a linear layer on top of the pooled output) e.g. for GLUE
tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MvpConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
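A minimal sketch of sequence classification with MVP; `num_labels=2` is an illustrative choice and the classification head is freshly initialized, so predictions are meaningless until fine-tuned:

```python
>>> import torch
>>> from transformers import MvpTokenizer, MvpForSequenceClassification

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForSequenceClassification.from_pretrained("RUCAIBox/mvp", num_labels=2)

>>> inputs = tokenizer("A really enjoyable film!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax(dim=-1).item()
```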
## MvpForQuestionAnswering

MVP Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer
on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MvpConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
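A minimal sketch of extractive question answering; the span classification head is randomly initialized, so sensible answers require fine-tuning on a QA dataset first:

```python
>>> import torch
>>> from transformers import MvpTokenizer, MvpForQuestionAnswering

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForQuestionAnswering.from_pretrained("RUCAIBox/mvp")

>>> question, context = "Who created Iron Man?", "Stan Lee created Iron Man."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])
```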
## MvpForCausalLM

No docstring available for MvpForCausalLM
Methods: forward
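Although no docstring is generated for this class, it follows the same pattern as the library's other decoder-only wrappers (e.g. `BartForCausalLM`). A hedged sketch; loading the full MVP checkpoint into the decoder-only class is purely illustrative:

```python
>>> import torch
>>> from transformers import MvpTokenizer, MvpForCausalLM

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> # Use the decoder as a standalone language model (no encoder cross-attention)
>>> model = MvpForCausalLM.from_pretrained("RUCAIBox/mvp", add_cross_attention=False)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # next-token prediction scores
```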
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 146_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mbart.md | https://huggingface.co/docs/transformers/en/model_doc/mbart/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# MBart and MBart-50

<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=mbart">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-mbart-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/mbart-large-50-one-to-many-mmt">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>

## Overview of MBart

The MBart model was presented in [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan
Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual
corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete
sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only
on the encoder, decoder, or reconstructing parts of the text.
This model was contributed by [valhalla](https://huggingface.co/valhalla). The authors' code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart).

## Training of MBart

MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks. As the
model is multilingual it expects the sequences in a different format. A special language id token is added in both the
source and target text. The source text format is `X [eos, src_lang_code]` where `X` is the source text. The
target text format is `[tgt_lang_code] X [eos]`. `bos` is never used.
The regular [`~MBartTokenizer.__call__`] will encode source text format passed as first argument or with the `text`
keyword, and target text format passed with the `text_target` keyword argument.
- Supervised training
```python
>>> from transformers import MBartForConditionalGeneration, MBartTokenizer

>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
>>> example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")

>>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
>>> # forward pass
>>> model(**inputs)
```
- Generation
While generating the target text, set the `decoder_start_token_id` to the target language id. The following
example shows how to translate English to Romanian using the *facebook/mbart-large-en-ro* model.
```python
>>> from transformers import MBartForConditionalGeneration, MBartTokenizer

>>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
>>> article = "UN Chief Says There Is No Military Solution in Syria"
>>> inputs = tokenizer(article, return_tensors="pt")
>>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Şeful ONU declară că nu există o soluţie militară în Siria"
```

## Overview of MBart-50

MBart-50 was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav
Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original *mbart-large-cc25* checkpoint by extending
its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretrained on 50
languages.
According to the abstract:
*Multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one
direction, a pretrained model is finetuned on many directions at the same time. It demonstrates that pretrained models
can be extended to incorporate additional languages without loss of performance. Multilingual finetuning improves on
average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while
improving 9.3 BLEU on average over bilingual baselines from scratch.*

## Training of MBart-50

The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix
for both source and target text i.e., the text format is `[lang_code] X [eos]`, where `lang_code` is the source
language id for source text and target language id for target text, with `X` being the source or target text
respectively.
MBart-50 has its own tokenizer [`MBart50Tokenizer`].
- Supervised training
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")

model(**model_inputs)  # forward pass
```
- Generation
To generate using the mBART-50 multilingual translation models, `eos_token_id` is used as the
`decoder_start_token_id` and the target language id is forced as the first generated token. To force the
target language id as the first generated token, pass the *forced_bos_token_id* parameter to the *generate* method.
The following example shows how to translate from Hindi to French and from Arabic to English using the
*facebook/mbart-large-50-many-to-many-mmt* checkpoint.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

# translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria." | 146_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mbart.md | https://huggingface.co/docs/transformers/en/model_doc/mbart/#training-of-mbart-50 | .md | # translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
```

## Documentation resources

- [Text classification task guide](../tasks/sequence_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)

## MBartConfig

This is the configuration class to store the configuration of a [`MBartModel`]. It is used to instantiate an MBART
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the MBART
[facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the MBART model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`MBartModel`] or [`TFMBartModel`].
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
max_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (`int`, *optional*, defaults to 2):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`.
Example:
```python
>>> from transformers import MBartConfig, MBartModel

>>> # Initializing an MBART facebook/mbart-large-cc25 style configuration
>>> configuration = MBartConfig()
>>> # Initializing a model (with random weights) from the facebook/mbart-large-cc25 style configuration
>>> model = MBartModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```

## MBartTokenizer

Construct an MBART tokenizer.
Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on
[SentencePiece](https://github.com/google/sentencepiece).
The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code>
<tokens> <eos>` for target language documents.
Examples:
```python
>>> from transformers import MBartTokenizer

>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
```
Methods: build_inputs_with_special_tokens
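As a quick sanity check of the source format described above, the encoded sequence should end with `</s>` followed by the source language code (a sketch, output shown for illustration):

```python
>>> from transformers import MBartTokenizer

>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
>>> ids = tokenizer("UN Chief Says There Is No Military Solution in Syria")["input_ids"]
>>> # Source sequences end with </s> followed by the source language code
>>> tokenizer.convert_ids_to_tokens(ids[-2:])
['</s>', 'en_XX']
```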
## MBartTokenizerFast

Construct a "fast" MBART tokenizer (backed by HuggingFace's *tokenizers* library). Based on
[BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code>
<tokens> <eos>` for target language documents.
Examples:
```python
>>> from transformers import MBartTokenizerFast

>>> tokenizer = MBartTokenizerFast.from_pretrained(
... "facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO"
... )
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
```

## MBart50Tokenizer

Construct an MBart50 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
src_lang (`str`, *optional*):
A string representing the source language.
tgt_lang (`str`, *optional*):
A string representing the target language.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Examples:
```python
>>> from transformers import MBart50Tokenizer

>>> tokenizer = MBart50Tokenizer.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
>>> src_text = " UN Chief Says There Is No Military Solution in Syria"
>>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
>>> # model(**model_inputs) should work
```

## MBart50TokenizerFast

Construct a "fast" MBART tokenizer for mBART-50 (backed by HuggingFace's *tokenizers* library). Based on
[BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
src_lang (`str`, *optional*):