source (string, 470 classes) | url (string, 49-167 chars) | file_type (string, 1 class) | chunk (string, 1-512 chars) | chunk_id (string, 5-9 chars)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
use_parallel_residual (`bool`, *optional*, defaults to `True`):
Whether to use a "parallel" formulation in each Transformer layer, which can provide a slight training
speedup at large scales (e.g. 20B).
rope_scaling (`Dict`, *optional*):
|
243_11_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
speedup at large scales (e.g. 20B).
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new RoPE type
and you expect the model to work on a longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
|
243_11_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
|
243_11_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied to the attention
computation. If unspecified, it defaults to the value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
|
243_11_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
|
243_11_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
|
243_11_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
attention_bias (`bool`, *optional*, defaults to `True`):
|
243_11_13
|
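As a minimal sketch of how the `rope_scaling` fields documented above fit together, the snippet below builds a GPTNeoXConfig with linear RoPE scaling; the specific factor and position count are illustrative assumptions, not recommended values.
```python
from transformers import GPTNeoXConfig

# Illustrative only: double the usable context of a model originally trained with
# 2048 positions by pairing a larger max_position_embeddings with a 2x linear factor.
config = GPTNeoXConfig(
    max_position_embeddings=4096,
    rope_scaling={"rope_type": "linear", "factor": 2.0},
)
print(config.rope_scaling)  # e.g. {'rope_type': 'linear', 'factor': 2.0}
```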
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
attention_bias (`bool`, *optional*, defaults to `True`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
Example:
```python
>>> from transformers import GPTNeoXConfig, GPTNeoXModel
|
243_11_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
>>> # Initializing a GPTNeoX gpt-neox-20b style configuration
>>> configuration = GPTNeoXConfig()
>>> # Initializing a model (with random weights) from the gpt-neox-20b style configuration
>>> model = GPTNeoXModel(configuration) # doctest: +SKIP
>>> # Accessing the model configuration
>>> configuration = model.config # doctest: +SKIP
```
|
243_11_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxtokenizerfast
|
.md
|
Construct a "fast" GPT-NeoX-20B tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
```python
>>> from transformers import GPTNeoXTokenizerFast
|
243_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxtokenizerfast
|
.md
|
>>> tokenizer = GPTNeoXTokenizerFast.from_pretrained("openai-community/gpt2")
>>> tokenizer("Hello world")["input_ids"]
[15496, 995]
|
243_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxtokenizerfast
|
.md
|
>>> tokenizer(" Hello world")["input_ids"]
[18435, 995]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
|
243_12_2
|
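A short sketch of the `add_prefix_space=True` workaround mentioned above, reusing the checkpoint from the earlier example; the caveat about the pretraining mismatch still applies.
```python
from transformers import GPTNeoXTokenizerFast

# With add_prefix_space=True, a sentence-initial word is tokenized as if it were
# preceded by a space, so "Hello world" now matches the " Hello world" encoding above.
tokenizer = GPTNeoXTokenizerFast.from_pretrained("openai-community/gpt2", add_prefix_space=True)
print(tokenizer("Hello world")["input_ids"])
```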
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxtokenizerfast
|
.md
|
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
|
243_12_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxtokenizerfast
|
.md
|
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
The end of sequence token.
pad_token (`str`, *optional*):
|
243_12_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxtokenizerfast
|
.md
|
eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
The end of sequence token.
pad_token (`str`, *optional*):
Token for padding a sequence.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word (the GPTNeoX tokenizer detects the beginning of words by the preceding space).
add_bos_token (`bool`, *optional*, defaults to `False`):
|
243_12_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxtokenizerfast
|
.md
|
add_bos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
|
243_12_6
|
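A hedged sketch of the `add_bos_token`/`add_eos_token` flags documented above, reusing the checkpoint from the earlier example (for that checkpoint both special tokens are `<|endoftext|>`).
```python
from transformers import GPTNeoXTokenizerFast

# Both flags default to False; turning them on wraps every encoded sequence
# with the BOS and EOS token ids.
tokenizer = GPTNeoXTokenizerFast.from_pretrained(
    "openai-community/gpt2", add_bos_token=True, add_eos_token=True
)
ids = tokenizer("Hello world")["input_ids"]
print(ids[0] == tokenizer.bos_token_id, ids[-1] == tokenizer.eos_token_id)  # expected: True True
```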
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxmodel
|
.md
|
The bare GPTNeoX Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`~GPTNeoXConfig`]): Model configuration class with all the parameters of the model.
|
243_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxmodel
|
.md
|
behavior.
Parameters:
config ([`~GPTNeoXConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
243_13_1
|
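To make the config-to-model relationship concrete, here is a small sketch that builds a tiny, randomly initialised GPTNeoXModel (the sizes are arbitrary assumptions chosen so the forward pass is cheap) and inspects the hidden states.
```python
import torch
from transformers import GPTNeoXConfig, GPTNeoXModel

# Tiny illustrative configuration; real checkpoints such as gpt-neox-20b are far larger.
config = GPTNeoXConfig(
    vocab_size=1024,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=256,
)
model = GPTNeoXModel(config)

input_ids = torch.randint(0, config.vocab_size, (1, 8))  # batch of 1, sequence length 8
outputs = model(input_ids)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 8, 64])
```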
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforcausallm
|
.md
|
GPTNeoX Model with a `language modeling` head on top for CLM fine-tuning.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`~GPTNeoXConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
243_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
243_14_1
|
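A usage sketch for the causal-LM head; `EleutherAI/gpt-neox-20b` is the reference checkpoint but is very large, so treat the snippet as a template and substitute a smaller GPT-NeoX-style checkpoint if needed.
```python
from transformers import AutoTokenizer, GPTNeoXForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

# Greedy generation of a short continuation.
inputs = tokenizer("GPT-NeoX-20B is a", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```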
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforquestionanswering
|
.md
|
The GPT-NeoX Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
|
243_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforquestionanswering
|
.md
|
behavior.
Parameters:
config ([`~GPTNeoXConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
243_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforsequenceclassification
|
.md
|
The GPTNeoX Model transformer with a sequence classification head on top (linear layer).
[`GPTNeoXForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
243_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
|
243_16_1
|
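A tiny stand-alone illustration of the pooling rule described above; this mirrors the idea (find the last non-padding position per row, assuming right padding), not the library's exact implementation.
```python
import torch

pad_token_id = 0
input_ids = torch.tensor([
    [11, 12, 13, 0, 0],   # 3 real tokens, 2 pads
    [21, 22, 23, 24, 0],  # 4 real tokens, 1 pad
])

# Index of the last token that is not a padding token in each row.
last_non_pad = (input_ids != pad_token_id).sum(dim=-1) - 1
print(last_non_pad)  # tensor([2, 3])
```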
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforsequenceclassification
|
.md
|
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`~GPTNeoXConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
243_16_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxforsequenceclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
243_16_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxfortokenclassification
|
.md
|
No docstring available for GPTNeoXForTokenClassification
Methods: forward
|
243_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
244_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
244_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#overview
|
.md
|
FSMT (FairSeq MachineTranslation) models were introduced in [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.
The abstract of the paper is the following:
*This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two
language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from
|
244_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#overview
|
.md
|
language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from
last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling
toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes,
as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific
|
244_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#overview
|
.md
|
as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific
data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the
human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations.
This system improves upon our WMT'18 submission by 4.5 BLEU points.*
|
244_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#overview
|
.md
|
This system improves upon our WMT'18 submission by 4.5 BLEU points.*
This model was contributed by [stas](https://huggingface.co/stas). The original code can be found
[here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19).
|
244_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#implementation-notes
|
.md
|
- FSMT uses source and target vocabulary pairs that aren't combined into one. It doesn't share token embeddings
either. Its tokenizer is very similar to [`XLMTokenizer`] and the main model is derived from
[`BartModel`].
|
244_2_0
|
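A usage sketch of the paired-vocabulary design: the tokenizer encodes with the source-language vocabulary and decodes generated ids with the target-language vocabulary. The checkpoint is the `facebook/wmt19-en-ru` model referenced in the FSMTConfig section below.
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

checkpoint = "facebook/wmt19-en-ru"
tokenizer = FSMTTokenizer.from_pretrained(checkpoint)
model = FSMTForConditionalGeneration.from_pretrained(checkpoint)

# Encode English input, generate Russian output with beam search, decode with the target vocab.
inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```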
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
This is the configuration class to store the configuration of a [`FSMTModel`]. It is used to instantiate a FSMT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the FSMT
[facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
244_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
langs (`List[str]`):
A list with the source language and the target language (e.g., `['en', 'ru']`).
src_vocab_size (`int`):
Vocabulary size of the encoder. Defines the number of different tokens that can be represented by the
`input_ids` passed to the forward method in the encoder.
tgt_vocab_size (`int`):
|
244_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
`input_ids` passed to the forward method in the encoder.
tgt_vocab_size (`int`):
Vocabulary size of the decoder. Defines the number of different tokens that can be represented by the
`input_ids` passed to the forward method in the decoder.
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
|
244_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
|
244_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in the decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in the encoder.
activation_function (`str` or `Callable`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
|
244_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (`int`, *optional*, defaults to 1024):
|
244_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_embedding (`bool`, *optional*, defaults to `True`):
Scale embeddings by dividing by sqrt(d_model).
bos_token_id (`int`, *optional*, defaults to 0):
|
244_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
Scale embeddings by dividing by sqrt(d_model).
bos_token_id (`int`, *optional*, defaults to 0):
Beginning of stream token id.
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
decoder_start_token_id (`int`, *optional*):
This model starts decoding with `eos_token_id`.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
|
244_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
Whether this is an encoder/decoder model.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie input and output embeddings.
num_beams (`int`, *optional*, defaults to 5):
|
244_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
Whether to tie input and output embeddings.
num_beams (`int`, *optional*, defaults to 5):
Number of beams for beam search that will be used by default in the `generate` method of the model. 1 means
no beam search.
length_penalty (`float`, *optional*, defaults to 1):
Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to
the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
|
244_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while
`length_penalty` < 0.0 encourages shorter sequences.
early_stopping (`bool`, *optional*, defaults to `False`):
Flag that will be used by default in the `generate` method of the model. Whether to stop the beam search
when at least `num_beams` sentences are finished per batch or not.
|
244_3_10
|
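A minimal sketch of how these generation-related fields live on the configuration; the values below simply restate the documented defaults, and `generate` picks them up unless overridden at call time.
```python
from transformers import FSMTConfig

# Restating the documented defaults for illustration.
config = FSMTConfig(num_beams=5, length_penalty=1.0, early_stopping=False)
print(config.num_beams, config.length_penalty, config.early_stopping)  # 5 1.0 False
```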
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
when at least `num_beams` sentences are finished per batch or not.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (`int`, *optional*, defaults to 2):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`.
Examples:
```python
>>> from transformers import FSMTConfig, FSMTModel
|
244_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtconfig
|
.md
|
>>> # Initializing a FSMT facebook/wmt19-en-ru style configuration
>>> config = FSMTConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = FSMTModel(config)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
244_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmttokenizer
|
.md
|
Construct a FAIRSEQ Transformer tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:
- Moses preprocessing and tokenization.
- Normalizing all input text.
- The argument `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like
"__classify__") to a vocabulary.
- The argument `langs` defines a pair of languages.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
|
244_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmttokenizer
|
.md
|
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
langs (`List[str]`, *optional*):
A list of two languages to translate from and to, for instance `["en", "ru"]`.
src_vocab_file (`str`, *optional*):
File containing the vocabulary for the source language.
tgt_vocab_file (`str`, *optional*):
File containing the vocabulary for the target language.
|
244_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmttokenizer
|
.md
|
tgt_vocab_file (`str`, *optional*):
File containing the vocabulary for the target language.
merges_file (`str`, *optional*):
File containing the merges.
do_lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to lowercase the input when tokenizing.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `"<s>"`):
|
244_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmttokenizer
|
.md
|
token instead.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
244_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmttokenizer
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
|
244_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmttokenizer
|
.md
|
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
244_4_5
|
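A hedged sketch of `build_inputs_with_special_tokens`, one of the methods listed above: encode without special tokens, then let the tokenizer append them itself (checkpoint reused from the FSMT examples).
```python
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")

# Raw ids without special tokens, then the same ids with the special tokens added.
ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
with_special = tokenizer.build_inputs_with_special_tokens(ids)
print(len(ids), len(with_special))  # the second list additionally contains the separator token
```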
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtmodel
|
.md
|
The bare FSMT Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
244_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`FSMTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
244_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
244_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtforconditionalgeneration
|
.md
|
The FSMT Model with a language modeling head. Can be used for translation.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
244_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`FSMTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
244_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fsmt.md
|
https://huggingface.co/docs/transformers/en/model_doc/fsmt/#fsmtforconditionalgeneration
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
244_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
245_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
245_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5
|
.md
|
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=mt5">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-mt5-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
245_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#overview
|
.md
|
The mT5 model was presented in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya
Siddhant, Aditya Barua, Colin Raffel.
The abstract from the paper is the following:
*The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain
|
245_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#overview
|
.md
|
*The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain
state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a
multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail
the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual
|
245_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#overview
|
.md
|
the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual
benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a
generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model
checkpoints used in this work are publicly available.*
|
245_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#overview
|
.md
|
checkpoints used in this work are publicly available.*
Note: mT5 was only pre-trained on [mC4](https://huggingface.co/datasets/mc4) excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model.
Since mT5 was pre-trained without supervision, there's no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
|
245_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#overview
|
.md
|
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Google has released the following variants:
- [google/mt5-small](https://huggingface.co/google/mt5-small)
- [google/mt5-base](https://huggingface.co/google/mt5-base)
- [google/mt5-large](https://huggingface.co/google/mt5-large)
- [google/mt5-xl](https://huggingface.co/google/mt5-xl)
- [google/mt5-xxl](https://huggingface.co/google/mt5-xxl).
|
245_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#overview
|
.md
|
- [google/mt5-xl](https://huggingface.co/google/mt5-xl)
- [google/mt5-xxl](https://huggingface.co/google/mt5-xxl).
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
found [here](https://github.com/google-research/multilingual-t5).
|
245_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#resources
|
.md
|
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
245_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5config
|
.md
|
This is the configuration class to store the configuration of a [`MT5Model`] or a [`TFMT5Model`]. It is used to
instantiate a mT5 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the mT5
[google/mt5-small](https://huggingface.co/google/mt5-small) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
245_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5config
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Arguments:
vocab_size (`int`, *optional*, defaults to 250112):
Vocabulary size of the mT5 model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`T5Model`] or [`TFT5Model`].
d_model (`int`, *optional*, defaults to 512):
Size of the encoder layers and the pooler layer.
|
245_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5config
|
.md
|
d_model (`int`, *optional*, defaults to 512):
Size of the encoder layers and the pooler layer.
d_kv (`int`, *optional*, defaults to 64):
Size of the key, query, value projections per attention head. Conventionally, `d_kv` is expected to equal `d_model // num_heads`,
but in the mt5-small architecture `d_kv` is not equal to `d_model // num_heads`. The `inner_dim` of the projection layer is defined as `num_heads * d_kv`.
|
245_4_2
|
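A small sketch of the `d_kv` note above, using the documented mt5-small-style defaults (`d_model=512`, `d_kv=64`, `num_heads=6`).
```python
from transformers import MT5Config

config = MT5Config()  # documented defaults: d_model=512, d_kv=64, num_heads=6

# d_model // num_heads does not equal d_kv here, and the attention projection
# inner_dim is num_heads * d_kv rather than d_model.
print(config.d_model // config.num_heads)  # 85
print(config.d_kv)                         # 64
print(config.num_heads * config.d_kv)      # 384
```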
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5config
|
.md
|
d_ff (`int`, *optional*, defaults to 1024):
Size of the intermediate feed forward layer in each `T5Block`.
num_layers (`int`, *optional*, defaults to 8):
Number of hidden layers in the Transformer encoder.
num_decoder_layers (`int`, *optional*):
Number of hidden layers in the Transformer decoder. Will use the same value as `num_layers` if not set.
num_heads (`int`, *optional*, defaults to 6):
Number of attention heads for each attention layer in the Transformer encoder.
|
245_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5config
|
.md
|
num_heads (`int`, *optional*, defaults to 6):
Number of attention heads for each attention layer in the Transformer encoder.
relative_attention_num_buckets (`int`, *optional*, defaults to 32):
The number of buckets to use for each attention layer.
relative_attention_max_distance (`int`, *optional*, defaults to 128):
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (`float`, *optional*, defaults to 0.1):
The ratio for all dropout layers.
|
245_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5config
|
.md
|
dropout_rate (`float`, *optional*, defaults to 0.1):
The ratio for all dropout layers.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
layer_norm_eps (`float`, *optional*, defaults to 1e-6):
The epsilon used by the layer normalization layers.
initializer_factor (`float`, *optional*, defaults to 1):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
|
245_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5config
|
.md
|
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
feed_forward_proj (`string`, *optional*, defaults to `"gated-gelu"`):
Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
|
245_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5tokenizer
|
.md
|
No docstring available for MT5Tokenizer
See [`T5Tokenizer`] for all details.
|
245_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5tokenizerfast
|
.md
|
No docstring available for MT5TokenizerFast
See [`T5TokenizerFast`] for all details.
<frameworkcontent>
<pt>
|
245_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5model
|
.md
|
The bare MT5 Model transformer outputting raw hidden-states without any specific head on top.
The MT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
|
245_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5model
|
.md
|
text-to-text denoising generative setting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
|
245_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5model
|
.md
|
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`MT5Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Examples:
```python
>>> from transformers import MT5Model, AutoTokenizer
|
245_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5model
|
.md
|
>>> model = MT5Model.from_pretrained("google/mt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
>>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
>>> summary = "Weiter Verhandlung in Syrien."
>>> inputs = tokenizer(article, return_tensors="pt")
>>> labels = tokenizer(text_target=summary, return_tensors="pt")
>>> outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"])
>>> hidden_states = outputs.last_hidden_state
```
|
245_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forconditionalgeneration
|
.md
|
MT5 Model with a `language modeling` head on top.
The MT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
|
245_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forconditionalgeneration
|
.md
|
text-to-text denoising generative setting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
|
245_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forconditionalgeneration
|
.md
|
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`MT5Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Examples:
```python
|
245_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forconditionalgeneration
|
.md
|
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Examples:
```python
>>> from transformers import MT5ForConditionalGeneration, AutoTokenizer
|
245_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forconditionalgeneration
|
.md
|
>>> model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
>>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
>>> summary = "Weiter Verhandlung in Syrien."
>>> inputs = tokenizer(article, text_target=summary, return_tensors="pt")
>>> outputs = model(**inputs)
>>> loss = outputs.loss
```
|
245_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5encodermodel
|
.md
|
The bare MT5 Model transformer outputting encoder's raw hidden-states without any specific head on top.
The MT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
|
245_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5encodermodel
|
.md
|
text-to-text denoising generative setting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
|
245_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5encodermodel
|
.md
|
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`MT5Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Examples:
```python
>>> from transformers import MT5EncoderModel, AutoTokenizer
|
245_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5encodermodel
|
.md
|
>>> model = MT5EncoderModel.from_pretrained("google/mt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
>>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
>>> input_ids = tokenizer(article, return_tensors="pt").input_ids
>>> outputs = model(input_ids)
>>> hidden_state = outputs.last_hidden_state
```
|
245_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forsequenceclassification
|
.md
|
MT5 model with a sequence classification head on top (a linear layer on top of the pooled output) e.g. for GLUE
tasks.
The MT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
|
245_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forsequenceclassification
|
.md
|
text-to-text denoising generative setting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
|
245_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forsequenceclassification
|
.md
|
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`MT5Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
245_10_2
|
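The chunks above do not include a usage example for this head, so here is a hedged sketch; the classification head is newly initialised on top of the pretrained checkpoint, so it must be fine-tuned before the logits are meaningful, and `num_labels=2` is an arbitrary assumption.
```python
import torch
from transformers import AutoTokenizer, MT5ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForSequenceClassification.from_pretrained("google/mt5-small", num_labels=2)

inputs = tokenizer("UN Offizier sagt, dass weiter verhandelt werden muss in Syrien.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```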
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5fortokenclassification
|
.md
|
MT5 Encoder Model with a token classification head on top (a linear layer on top of the hidden-states output)
e.g. for Named-Entity-Recognition (NER) tasks.
The MT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
|
245_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5fortokenclassification
|
.md
|
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
245_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5fortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`MT5Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
245_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5fortokenclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
245_11_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forquestionanswering
|
.md
|
MT5 Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers
on top of the hidden-states output to compute `span start logits` and `span end logits`).
The MT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
|
245_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mt5.md
|
https://huggingface.co/docs/transformers/en/model_doc/mt5/#mt5forquestionanswering
|
.md
|
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
245_12_1
|