Dataset columns: source (string, 470 distinct values) · url (string, 49–167 chars) · file_type (string, 1 distinct value) · chunk (string, 1–512 chars) · chunk_id (string, 5–9 chars)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
`input_ids` passed when calling [`BertModel`] or [`TFBertModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072):
263_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
263_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
`"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large
263_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`BertModel`] or [`TFBertModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12):
263_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
263_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`):
263_8_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/value attention states (not used by all models). Only relevant if `config.is_decoder=True`. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. Examples:

```python
>>> from transformers import BertConfig, BertModel
263_8_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
>>> # Initializing a BERT google-bert/bert-base-uncased style configuration
>>> configuration = BertConfig()

>>> # Initializing a model (with random weights) from the google-bert/bert-base-uncased style configuration
>>> model = BertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

Methods: all
263_8_9
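To make the arguments documented above concrete, here is a small illustrative sketch; the reduced sizes are arbitrary choices for demonstration, not an official checkpoint, and `hidden_size` must remain divisible by `num_attention_heads`:

```python
>>> from transformers import BertConfig, BertModel

>>> # Illustrative, non-default hyperparameters (a deliberately small architecture)
>>> small_config = BertConfig(
...     hidden_size=256,
...     num_hidden_layers=4,
...     num_attention_heads=4,
...     intermediate_size=1024,
...     max_position_embeddings=512,
... )

>>> # Randomly initialized model with the custom architecture
>>> small_model = BertModel(small_config)
```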
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizer
.md
Construct a BERT tokenizer. Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`):
263_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizer
.md
Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`): Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True` unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
263_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizer
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`):
263_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizer
.md
token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
263_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizer
.md
instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
263_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizer
.md
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
263_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizer
.md
value for `lowercase` (as in the original BERT). clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not to clean up spaces after decoding; the cleanup consists of removing potential artifacts like extra spaces. Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary <frameworkcontent> <pt>
263_9_6
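A minimal usage sketch, assuming the `google-bert/bert-base-uncased` vocabulary referenced elsewhere on this page; it shows how `build_inputs_with_special_tokens` and `create_token_type_ids_from_sequences` shape an encoded sentence pair:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")

>>> # A sentence pair is wrapped as [CLS] A [SEP] B [SEP]
>>> encoding = tokenizer("Hello, how are you?", "I am fine, thanks.")

>>> # token_type_ids are 0 for the first segment and 1 for the second
>>> tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])
>>> tokens[0], tokens[-1]
('[CLS]', '[SEP]')
```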
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizerfast
.md
Construct a "fast" BERT tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. unk_token (`str`, *optional*, defaults to `"[UNK]"`):
263_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizerfast
.md
Whether or not to lowercase the input when tokenizing. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last
263_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizerfast
.md
sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
263_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizerfast
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. clean_text (`bool`, *optional*, defaults to `True`):
263_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizerfast
.md
modeling. This is the token which the model will try to predict. clean_text (`bool`, *optional*, defaults to `True`): Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this issue](https://github.com/huggingface/transformers/issues/328)).
263_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#berttokenizerfast
.md
issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). wordpieces_prefix (`str`, *optional*, defaults to `"##"`): The prefix for subwords. </pt> <tf>
263_10_5
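A minimal sketch of the fast tokenizer, again assuming the `google-bert/bert-base-uncased` vocabulary; unlike the slow tokenizer, the Rust-backed version can also return character offsets for each token:

```python
>>> from transformers import BertTokenizerFast

>>> tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")

>>> # return_offsets_mapping is only supported by the fast (Rust-backed) tokenizer
>>> encoding = tokenizer("BERT uses WordPiece.", return_offsets_mapping=True)
>>> sorted(encoding.keys())
['attention_mask', 'input_ids', 'offset_mapping', 'token_type_ids']
```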
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfberttokenizer
.md
No docstring available for TFBertTokenizer </tf> </frameworkcontent>
263_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bert-specific-outputs
.md
models.bert.modeling_bert.BertForPreTrainingOutput Output type of [`BertForPreTraining`]. Args: loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
263_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bert-specific-outputs
.md
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (`torch.FloatTensor` of shape `(batch_size, 2)`): Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
263_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bert-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
263_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bert-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. [[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput
263_12_3
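To show how these output fields are accessed in practice, a short sketch assuming the `google-bert/bert-base-uncased` checkpoint (which ships the original MLM + NSP pretraining heads):

```python
>>> import torch
>>> from transformers import BertTokenizer, BertForPreTraining

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = BertForPreTraining.from_pretrained("google-bert/bert-base-uncased")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # (batch_size, sequence_length, vocab_size) and (batch_size, 2) respectively
>>> outputs.prediction_logits.shape, outputs.seq_relationship_logits.shape
(torch.Size([1, 8, 30522]), torch.Size([1, 2]))
```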
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bert-specific-outputs
.md
[[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput
263_12_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bert-specific-outputs
.md
[[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput <frameworkcontent> <pt>
263_12_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertmodel
.md
The bare Bert Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
263_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
263_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is
263_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertmodel
.md
cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set
263_13_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertmodel
.md
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both `is_decoder` and `add_cross_attention` set to `True`; `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
263_13_4
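A minimal sketch of the decoder setup described above; the weights are randomly initialized (no checkpoint is loaded) and the tensor sizes are arbitrary illustrations:

```python
>>> import torch
>>> from transformers import BertConfig, BertModel

>>> # is_decoder enables causal masking; add_cross_attention inserts cross-attention layers
>>> config = BertConfig(is_decoder=True, add_cross_attention=True)
>>> decoder = BertModel(config)

>>> batch_size, tgt_len, src_len = 1, 5, 7
>>> input_ids = torch.randint(0, config.vocab_size, (batch_size, tgt_len))
>>> encoder_hidden_states = torch.randn(batch_size, src_len, config.hidden_size)

>>> # Cross-attention attends from the 5 decoder positions to the 7 encoder positions
>>> outputs = decoder(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
>>> outputs.last_hidden_state.shape
torch.Size([1, 5, 768])
```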
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforpretraining
.md
Bert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next sentence prediction (classification)` head. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
263_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforpretraining
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
263_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforpretraining
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
263_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertlmheadmodel
.md
Bert Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
263_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertlmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
263_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertlmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
263_15_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertformaskedlm
.md
Bert Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
263_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertformaskedlm
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
263_16_1
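A short fill-mask sketch for the masked-language-modeling head, assuming the `google-bert/bert-base-uncased` checkpoint; the decoded word is a model prediction and could differ across checkpoint revisions:

```python
>>> import torch
>>> from transformers import BertTokenizer, BertForMaskedLM

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = BertForMaskedLM.from_pretrained("google-bert/bert-base-uncased")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Take the highest-scoring vocabulary token at the [MASK] position
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> print(tokenizer.decode(predicted_id))  # expected to print "paris" for this checkpoint
```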
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertfornextsentenceprediction
.md
Bert Model with a `next sentence prediction (classification)` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
263_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertfornextsentenceprediction
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
263_17_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertfornextsentenceprediction
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
263_17_2
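A sketch of the next-sentence-prediction head, assuming the `google-bert/bert-base-uncased` checkpoint; logit index 0 scores "sentence B follows sentence A" and index 1 scores "sentence B is random":

```python
>>> import torch
>>> from transformers import BertTokenizer, BertForNextSentencePrediction

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = BertForNextSentencePrediction.from_pretrained("google-bert/bert-base-uncased")

>>> prompt = "In Italy, pizza is often served unsliced."
>>> next_sentence = "The sky is blue due to the shorter wavelength of blue light."
>>> encoding = tokenizer(prompt, next_sentence, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**encoding).logits

>>> # For an unrelated pair like this one, the "random" class (index 1) typically dominates
>>> probs = torch.softmax(logits, dim=-1)
```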
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforsequenceclassification
.md
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
263_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
263_18_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
263_18_2
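A sketch of the classification head; the head below is newly initialized on top of `google-bert/bert-base-uncased` (it must be fine-tuned before its predictions mean anything), and `num_labels=2` plus the label value are illustrative assumptions:

```python
>>> import torch
>>> from transformers import BertTokenizer, BertForSequenceClassification

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> # The classification head is newly initialized and untrained at this point
>>> model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> labels = torch.tensor([1])  # hypothetical label: 1 = positive in a binary sentiment task

>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss  # cross-entropy against the provided labels, usable for fine-tuning
>>> outputs.logits.shape  # (batch_size, num_labels)
torch.Size([1, 2])
```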
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertformultiplechoice
.md
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
263_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
263_19_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertformultiplechoice
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
263_19_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertfortokenclassification
.md
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
263_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
263_20_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertfortokenclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
263_20_2
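A sketch of the token-classification head; the head is again newly initialized on top of `google-bert/bert-base-uncased` and `num_labels=9` is an illustrative assumption (mirroring a typical NER tag set), so the predictions are untrained:

```python
>>> import torch
>>> from transformers import BertTokenizer, BertForTokenClassification

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = BertForTokenClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=9)

>>> inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # One label score vector per input token, including [CLS] and [SEP]
>>> predictions = logits.argmax(dim=-1)
>>> predictions.shape == inputs.input_ids.shape
True
```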
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforquestionanswering
.md
Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
263_21_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforquestionanswering
.md
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model.
263_21_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertforquestionanswering
.md
and behavior. Parameters: config ([`BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
263_21_2
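A sketch of extractive question answering; `deepset/bert-base-cased-squad2` is assumed here as one publicly available SQuAD-fine-tuned BERT checkpoint, and any QA-fine-tuned BERT model can be substituted:

```python
>>> import torch
>>> from transformers import AutoTokenizer, BertForQuestionAnswering

>>> checkpoint = "deepset/bert-base-cased-squad2"  # assumed QA checkpoint; swap in your own
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = BertForQuestionAnswering.from_pretrained(checkpoint)

>>> question = "Where do penguins live?"
>>> context = "Penguins live almost exclusively in the Southern Hemisphere."
>>> inputs = tokenizer(question, context, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # The answer span is recovered from the argmax of the start and end logits
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])
```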
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertmodel
.md
No docstring available for TFBertModel Methods: call
263_22_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertforpretraining
.md
No docstring available for TFBertForPreTraining Methods: call
263_23_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertmodellmheadmodel
.md
No docstring available for TFBertLMHeadModel Methods: call
263_24_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertformaskedlm
.md
No docstring available for TFBertForMaskedLM Methods: call
263_25_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertfornextsentenceprediction
.md
No docstring available for TFBertForNextSentencePrediction Methods: call
263_26_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertforsequenceclassification
.md
No docstring available for TFBertForSequenceClassification Methods: call
263_27_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertformultiplechoice
.md
No docstring available for TFBertForMultipleChoice Methods: call
263_28_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertfortokenclassification
.md
No docstring available for TFBertForTokenClassification Methods: call
263_29_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#tfbertforquestionanswering
.md
No docstring available for TFBertForQuestionAnswering Methods: call </tf> <jax>
263_30_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertmodel
.md
No docstring available for FlaxBertModel Methods: __call__
263_31_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertforpretraining
.md
No docstring available for FlaxBertForPreTraining Methods: __call__
263_32_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertforcausallm
.md
No docstring available for FlaxBertForCausalLM Methods: __call__
263_33_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertformaskedlm
.md
No docstring available for FlaxBertForMaskedLM Methods: __call__
263_34_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertfornextsentenceprediction
.md
No docstring available for FlaxBertForNextSentencePrediction Methods: __call__
263_35_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertforsequenceclassification
.md
No docstring available for FlaxBertForSequenceClassification Methods: __call__
263_36_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertformultiplechoice
.md
No docstring available for FlaxBertForMultipleChoice Methods: __call__
263_37_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertfortokenclassification
.md
No docstring available for FlaxBertForTokenClassification Methods: __call__
263_38_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#flaxbertforquestionanswering
.md
No docstring available for FlaxBertForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
263_39_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
264_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
264_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5
.md
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=umt5"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-mt5-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
264_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#overview
.md
The UMT5 model was proposed in [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant. The abstract from the paper is the following:
264_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#overview
.md
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus.
264_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#overview
.md
while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion
264_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#overview
.md
As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.*
264_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#overview
.md
Google has released the following variants: - [google/umt5-small](https://huggingface.co/google/umt5-small) - [google/umt5-base](https://huggingface.co/google/umt5-base) - [google/umt5-xl](https://huggingface.co/google/umt5-xl) - [google/umt5-xxl](https://huggingface.co/google/umt5-xxl). This model was contributed by [agemagician](https://huggingface.co/agemagician) and [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/google-research/t5x).
264_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#usage-tips
.md
- UMT5 was only pre-trained on [mC4](https://huggingface.co/datasets/mc4) excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. - Since umT5 was pre-trained in an unsupervised manner, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
264_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#differences-with-mt5
.md
`UMT5` is based on mT5, with a non-shared relative positional bias that is computed for each layer. This means the model sets `has_relative_bias` for each layer. The conversion script is also different because the model was saved in t5x's latest checkpointing format.
264_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#sample-usage
.md
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
264_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#sample-usage
.md
>>> inputs = tokenizer(
...     "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>.",
...     return_tensors="pt",
... )
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs))
['<pad><extra_id_0>nyone who<extra_id_1> drink<extra_id_2> a<extra_id_3> alcohol<extra_id_4> A<extra_id_5> A. This<extra_id_6> I<extra_id_7><extra_id_52><extra_id_53></s>']
```

<Tip>
264_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#sample-usage
.md
``` <Tip> Refer to [T5's documentation page](t5) for more tips, code examples and notebooks. </Tip>
264_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5config
.md
This is the configuration class to store the configuration of a [`UMT5Model`]. It is used to instantiate a UMT5 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the UMT5 [google/umt5-small](https://huggingface.co/google/umt5-small) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
264_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5config
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Arguments: vocab_size (`int`, *optional*, defaults to 250112): Vocabulary size of the UMT5 model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`UMT5Model`] or [`TFUMT5Model`]. d_model (`int`, *optional*, defaults to 512): Size of the encoder layers and the pooler layer.
264_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5config
.md
d_model (`int`, *optional*, defaults to 512): Size of the encoder layers and the pooler layer. d_kv (`int`, *optional*, defaults to 64): Size of the key, query, value projections per attention head. `d_kv` has to be equal to `d_model // num_heads`. d_ff (`int`, *optional*, defaults to 1024): Size of the intermediate feed forward layer in each `UMT5Block`. num_layers (`int`, *optional*, defaults to 8): Number of hidden layers in the Transformer encoder. num_decoder_layers (`int`, *optional*):
264_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5config
.md
Number of hidden layers in the Transformer encoder. num_decoder_layers (`int`, *optional*): Number of hidden layers in the Transformer decoder. Will use the same value as `num_layers` if not set. num_heads (`int`, *optional*, defaults to 6): Number of attention heads for each attention layer in the Transformer encoder. relative_attention_num_buckets (`int`, *optional*, defaults to 32): The number of buckets to use for each attention layer.
264_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5config
.md
relative_attention_num_buckets (`int`, *optional*, defaults to 32): The number of buckets to use for each attention layer. relative_attention_max_distance (`int`, *optional*, defaults to 128): The maximum distance of the longer sequences for the bucket separation. dropout_rate (`float`, *optional*, defaults to 0.1): The ratio for all dropout layers. classifier_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for classifier. layer_norm_eps (`float`, *optional*, defaults to 1e-6):
264_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5config
.md
The dropout ratio for classifier. layer_norm_eps (`float`, *optional*, defaults to 1e-6): The epsilon used by the layer normalization layers. initializer_factor (`float`, *optional*, defaults to 1): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). feed_forward_proj (`string`, *optional*, defaults to `"gated-gelu"`): Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`.
264_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5config
.md
Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/value attention states (not used by all models).
264_6_6
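An illustrative configuration sketch; the overridden values below are arbitrary (keeping the documented constraint `d_kv == d_model // num_heads`), and the resulting model is randomly initialized:

```python
>>> from transformers import UMT5Config, UMT5Model

>>> # Defaults mirror google/umt5-small; the overrides here are purely for illustration
>>> configuration = UMT5Config(d_model=256, num_layers=4, num_heads=4, d_kv=64)

>>> # Randomly initialized model following the custom configuration
>>> model = UMT5Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```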
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5model
.md
The bare UMT5 Model transformer outputting raw hidden-states without any specific head on top. The UMT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting.
264_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5model
.md
text-to-text denoising generative setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
264_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5model
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`UMT5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Examples:

```python
>>> from transformers import UMT5Model, AutoTokenizer
264_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5model
.md
>>> model = UMT5Model.from_pretrained("google/umt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")

>>> noisy_text = "UN Offizier sagt, dass weiter <extra_id_0> werden muss in Syrien."
>>> label = "<extra_id_0> verhandelt"

>>> inputs = tokenizer(noisy_text, return_tensors="pt")
>>> labels = tokenizer(text_target=label, return_tensors="pt")
264_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5model
.md
>>> outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"])
>>> hidden_states = outputs.last_hidden_state
```

Methods: forward
264_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/umt5.md
https://huggingface.co/docs/transformers/en/model_doc/umt5/#umt5forconditionalgeneration
.md
UMT5 Model with a `language modeling` head on top. The UMT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting.
264_8_0
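To complement the generation example earlier on this page, a short sketch of a training-style forward pass with the LM head, assuming the `google/umt5-small` checkpoint; the span-corruption input/target pair is illustrative:

```python
>>> from transformers import AutoTokenizer, UMT5ForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
>>> model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")

>>> # Illustrative span-corruption pair; labels are the sentinel-delimited target tokens
>>> inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
>>> labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids

>>> outputs = model(input_ids=inputs.input_ids, labels=labels)
>>> loss = outputs.loss      # cross-entropy over the target tokens
>>> logits = outputs.logits  # (batch_size, target_length, vocab_size)
```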