| source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
|---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
|
206_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
max_target_positions (`int`, *optional*, defaults to 1024):
|
206_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
max_target_positions (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
Example:
```python
>>> from transformers import Speech2Text2Config, Speech2Text2ForCausalLM
```
|
206_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
```python
>>> # Initializing a Speech2Text2 s2t_transformer_s style configuration
>>> configuration = Speech2Text2Config()
>>> # Initializing a model (with random weights) from the s2t_transformer_s style configuration
>>> model = Speech2Text2ForCausalLM(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
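Beyond the default constructor shown above, the documented arguments can be overridden directly. A minimal sketch; the values below are arbitrary and chosen only to illustrate the arguments described in this section:
```python
from transformers import Speech2Text2Config, Speech2Text2ForCausalLM

# Arbitrary overrides of the arguments documented above (not tuned values).
configuration = Speech2Text2Config(
    attention_dropout=0.1,
    activation_dropout=0.1,
    decoder_layerdrop=0.05,
    max_target_positions=512,
    use_cache=False,
)
model = Speech2Text2ForCausalLM(configuration)
print(model.config.max_target_positions)  # 512
```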
|
206_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2texttokenizer
|
.md
|
Constructs a Speech2Text2Tokenizer.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
|
206_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2texttokenizer
|
.md
|
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
**kwargs
Additional keyword arguments passed along to [`PreTrainedTokenizer`]
Methods: batch_decode
- decode
- save_vocabulary
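As a quick illustration of the `decode`/`batch_decode` methods listed above, the sketch below loads the tokenizer from a checkpoint; the `facebook/s2t-wav2vec2-large-en-de` repo name is an assumption used for illustration, and any repo shipping a Speech2Text2 vocabulary works the same way:
```python
from transformers import Speech2Text2Tokenizer

# Assumed checkpoint name, used only for illustration.
tokenizer = Speech2Text2Tokenizer.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

ids = tokenizer("hello world").input_ids
print(tokenizer.decode(ids, skip_special_tokens=True))                # single sequence
print(tokenizer.batch_decode([ids, ids], skip_special_tokens=True))   # batch of sequences
```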
|
206_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2processor
|
.md
|
Constructs a Speech2Text2 processor which wraps a Speech2Text2 feature extractor and a Speech2Text2 tokenizer into
a single processor.
[`Speech2Text2Processor`] offers all the functionalities of [`AutoFeatureExtractor`] and [`Speech2Text2Tokenizer`].
See the [`~Speech2Text2Processor.__call__`] and [`~Speech2Text2Processor.decode`] for more information.
Args:
feature_extractor (`AutoFeatureExtractor`):
An instance of [`AutoFeatureExtractor`]. The feature extractor is a required input.
|
206_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2processor
|
.md
|
feature_extractor (`AutoFeatureExtractor`):
An instance of [`AutoFeatureExtractor`]. The feature extractor is a required input.
tokenizer (`Speech2Text2Tokenizer`):
An instance of [`Speech2Text2Tokenizer`]. The tokenizer is a required input.
Methods: __call__
- from_pretrained
- save_pretrained
- batch_decode
- decode
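A minimal sketch of composing and (re)loading the processor from its two required inputs; the repo name is assumed for illustration:
```python
from transformers import AutoFeatureExtractor, Speech2Text2Processor, Speech2Text2Tokenizer

# Assumed repo that ships both a feature extractor and a Speech2Text2 vocabulary.
repo = "facebook/s2t-wav2vec2-large-en-de"
feature_extractor = AutoFeatureExtractor.from_pretrained(repo)
tokenizer = Speech2Text2Tokenizer.from_pretrained(repo)

processor = Speech2Text2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
processor.save_pretrained("./s2t2-processor")                           # save_pretrained
processor = Speech2Text2Processor.from_pretrained("./s2t2-processor")   # from_pretrained
```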
|
206_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2forcausallm
|
.md
|
The Speech2Text2 Decoder with a language modeling head. Can be used as the decoder part of [`EncoderDecoderModel`] and [`SpeechEncoderDecoderModel`].
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
206_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2forcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Speech2Text2Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
206_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2forcausallm
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
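A hedged sketch of the decoder in its typical role inside a speech encoder-decoder model; the checkpoint name and the silent dummy waveform are assumptions made for illustration:
```python
import torch
from transformers import SpeechEncoderDecoderModel, Speech2Text2Processor

# Assumed checkpoint pairing a Wav2Vec2 encoder with a Speech2Text2 decoder.
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

# One second of silence at 16 kHz stands in for a real waveform.
waveform = torch.zeros(16000).numpy()
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(inputs["input_values"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```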
|
206_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
207_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
207_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#overview
|
.md
|
LayoutXLM was proposed in [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha
Zhang, Furu Wei. It's a multilingual extension of the [LayoutLMv2 model](https://arxiv.org/abs/2012.14740) trained
on 53 languages.
The abstract from the paper is the following:
|
207_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#overview
|
.md
|
on 53 languages.
The abstract from the paper is the following:
*Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document
understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In
this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to
|
207_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#overview
|
.md
|
this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to
bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also
introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in
7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled
|
207_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#overview
|
.md
|
7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled
for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA
cross-lingual pre-trained models on the XFUN dataset.*
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm).
|
207_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#usage-tips-and-examples
|
.md
|
One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so:
```python
from transformers import LayoutLMv2Model
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")
```
Note that LayoutXLM has its own tokenizer, based on
[`LayoutXLMTokenizer`]/[`LayoutXLMTokenizerFast`]. You can initialize it as
follows:
```python
from transformers import LayoutXLMTokenizer
```
|
207_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#usage-tips-and-examples
|
.md
|
```python
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
```
Similar to LayoutLMv2, you can use [`LayoutXLMProcessor`] (which internally applies
[`LayoutLMv2ImageProcessor`] and
[`LayoutXLMTokenizer`]/[`LayoutXLMTokenizerFast`] in sequence) to prepare all
data for the model.
<Tip>
As LayoutXLM's architecture is equivalent to that of LayoutLMv2, one can refer to [LayoutLMv2's documentation page](layoutlmv2) for all tips, code examples and notebooks.
</Tip>
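As a concrete sketch of the pipeline just described, the snippet below prepares a single document image with [`LayoutXLMProcessor`]; the image path is a placeholder, and with the default `apply_ocr=True` the image processor extracts the words and boxes itself:
```python
from PIL import Image
from transformers import LayoutXLMProcessor

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")

# Placeholder path; with apply_ocr=True (the default) OCR produces words and boxes.
image = Image.open("document.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
print(encoding.keys())  # e.g. input_ids, attention_mask, bbox and the processed image
```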
|
207_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on
[SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
|
207_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
|
207_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
207_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
207_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
207_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
cls_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [CLS] token.
|
207_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
cls_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [CLS] token.
sep_token_box (`List[int]`, *optional*, defaults to `[1000, 1000, 1000, 1000]`):
The bounding box to use for the special [SEP] token.
pad_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [PAD] token.
pad_token_label (`int`, *optional*, defaults to -100):
|
207_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
The bounding box to use for the special [PAD] token.
pad_token_label (`int`, *optional*, defaults to -100):
The label to use for padding tokens. Defaults to -100, which is the `ignore_index` of PyTorch's
CrossEntropyLoss.
only_label_first_subword (`bool`, *optional*, defaults to `True`):
Whether or not to only label the first subword, in case word labels are provided.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
|
207_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
|
207_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
|
207_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizer
|
.md
|
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).
Methods: __call__
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
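A small sketch of the `__call__` method listed above, passing pre-extracted words with their 0-1000 normalized boxes and hypothetical word-level labels:
```python
from transformers import LayoutXLMTokenizer

tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")

words = ["Hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # normalized to a 0-1000 scale
word_labels = [0, 1]  # hypothetical token-classification labels

encoding = tokenizer(words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding["bbox"].shape, encoding["labels"])
```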
|
207_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
Construct a "fast" LayoutXLM tokenizer (backed by HuggingFace's *tokenizers* library). Adapted from
[`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on
[BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
|
207_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
|
207_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
207_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
207_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
207_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
cls_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [CLS] token.
|
207_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
cls_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [CLS] token.
sep_token_box (`List[int]`, *optional*, defaults to `[1000, 1000, 1000, 1000]`):
The bounding box to use for the special [SEP] token.
pad_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [PAD] token.
pad_token_label (`int`, *optional*, defaults to -100):
|
207_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
The bounding box to use for the special [PAD] token.
pad_token_label (`int`, *optional*, defaults to -100):
The label to use for padding tokens. Defaults to -100, which is the `ignore_index` of PyTorch's
CrossEntropyLoss.
only_label_first_subword (`bool`, *optional*, defaults to `True`):
Whether or not to only label the first subword, in case word labels are provided.
additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`):
|
207_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmtokenizerfast
|
.md
|
additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`):
Additional special tokens used by the tokenizer.
Methods: __call__
|
207_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmprocessor
|
.md
|
Constructs a LayoutXLM processor which combines a LayoutXLM image processor and a LayoutXLM tokenizer into a single
processor.
[`LayoutXLMProcessor`] offers all the functionalities you need to prepare data for the model.
It first uses [`LayoutLMv2ImageProcessor`] to resize document images to a fixed size, and optionally applies OCR to
get words and normalized bounding boxes. These are then provided to [`LayoutXLMTokenizer`] or
|
207_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmprocessor
|
.md
|
get words and normalized bounding boxes. These are then provided to [`LayoutXLMTokenizer`] or
[`LayoutXLMTokenizerFast`], which turns the words and bounding boxes into token-level `input_ids`,
`attention_mask`, `token_type_ids`, `bbox`. Optionally, one can provide integer `word_labels`, which are turned
into token-level `labels` for token classification tasks (such as FUNSD, CORD).
Args:
image_processor (`LayoutLMv2ImageProcessor`, *optional*):
|
207_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutxlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/layoutxlm/#layoutxlmprocessor
|
.md
|
Args:
image_processor (`LayoutLMv2ImageProcessor`, *optional*):
An instance of [`LayoutLMv2ImageProcessor`]. The image processor is a required input.
tokenizer (`LayoutXLMTokenizer` or `LayoutXLMTokenizerFast`, *optional*):
An instance of [`LayoutXLMTokenizer`] or [`LayoutXLMTokenizerFast`]. The tokenizer is a required input.
Methods: __call__
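A minimal sketch of building the processor from its two components with OCR disabled, so that words and boxes are supplied by the caller; the image path, words and boxes are illustrative:
```python
from PIL import Image
from transformers import LayoutLMv2ImageProcessor, LayoutXLMProcessor, LayoutXLMTokenizerFast

# OCR disabled: the caller provides words and 0-1000 normalized boxes.
image_processor = LayoutLMv2ImageProcessor(apply_ocr=False)
tokenizer = LayoutXLMTokenizerFast.from_pretrained("microsoft/layoutxlm-base")
processor = LayoutXLMProcessor(image_processor, tokenizer)

image = Image.open("document.png").convert("RGB")  # placeholder path
words = ["Invoice", "Total", "100"]
boxes = [[10, 10, 120, 40], [10, 60, 90, 90], [100, 60, 160, 90]]
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
print(encoding["input_ids"].shape, encoding["bbox"].shape)
```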
|
207_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/
|
.md
|
<!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
208_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/
|
.md
|
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
208_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#overview
|
.md
|
The MegatronBERT model was proposed in [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
*Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
|
208_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#overview
|
.md
|
*Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
|
208_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#overview
|
.md
|
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
|
208_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#overview
|
.md
|
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
|
208_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#overview
|
.md
|
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
|
208_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#overview
|
.md
|
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).*
This model was contributed by [jdemouth](https://huggingface.co/jdemouth). The original code can be found [here](https://github.com/NVIDIA/Megatron-LM).
|
208_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#overview
|
.md
|
That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular,
it contains a hybrid model parallel approach using "tensor parallel" and "pipeline parallel" techniques.
|
208_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#usage-tips
|
.md
|
We have provided pretrained [BERT-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m) checkpoints
for use in evaluating or fine-tuning downstream tasks.
To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and set up the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1).
|
208_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#usage-tips
|
.md
|
Alternatively, you can directly download the checkpoints using:
BERT-345M-uncased:
```bash
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O megatron_bert_345m_v0_1_uncased.zip
```
BERT-345M-cased:
```bash
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O megatron_bert_345m_v0_1_cased.zip
```
|
208_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#usage-tips
|
.md
|
```bash
megatron_bert_345m_v0_1_cased.zip
```
Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that can
easily be loaded by Hugging Face Transformers and our port of the BERT code.
The following commands allow you to do the conversion. We assume that the folder `models/megatron_bert` contains
`megatron_bert_345m_v0_1_{cased, uncased}.zip` and that the commands are run from inside that folder:
|
208_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#usage-tips
|
.md
|
`megatron_bert_345m_v0_1_{cased, uncased}.zip` and that the commands are run from inside that folder:
```bash
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip
```
```bash
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip
```
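After conversion, the checkpoint can be loaded like any other Transformers model. A sketch, assuming the script above wrote its output into a local directory named after the uncased checkpoint (the exact output path depends on how you ran the conversion):
```python
from transformers import MegatronBertForMaskedLM

# Illustrative local path: wherever the conversion script wrote config.json and the weights.
model = MegatronBertForMaskedLM.from_pretrained("./megatron_bert_345m_v0_1_uncased")
print(model.config.hidden_size, model.config.num_hidden_layers)  # 1024 24 for the 345M model
```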
|
208_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
208_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
This is the configuration class to store the configuration of a [`MegatronBertModel`]. It is used to instantiate a
MEGATRON_BERT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MEGATRON_BERT
[nvidia/megatron-bert-uncased-345m](https://huggingface.co/nvidia/megatron-bert-uncased-345m) architecture.
|
208_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
[nvidia/megatron-bert-uncased-345m](https://huggingface.co/nvidia/megatron-bert-uncased-345m) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 29056):
Vocabulary size of the MEGATRON_BERT model. Defines the number of different tokens that can be represented
by the `inputs_ids` passed when calling [`MegatronBertModel`].
|
208_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
by the `inputs_ids` passed when calling [`MegatronBertModel`].
hidden_size (`int`, *optional*, defaults to 1024):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 4096):
|
208_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
|
208_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
208_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`MegatronBertModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
|
208_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
|
208_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
is_decoder (`bool`, *optional*, defaults to `False`):
Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
use_cache (`bool`, *optional*, defaults to `True`):
|
208_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
Examples:
```python
>>> from transformers import MegatronBertConfig, MegatronBertModel
```
|
208_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertconfig
|
.md
|
```python
>>> # Initializing a MEGATRON_BERT google-bert/bert-base-uncased style configuration
>>> configuration = MegatronBertConfig()
>>> # Initializing a model (with random weights) from the google-bert/bert-base-uncased style configuration
>>> model = MegatronBertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
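The documented arguments can also be overridden to define a smaller architecture; a sketch with arbitrary values:
```python
from transformers import MegatronBertConfig, MegatronBertModel

# Arbitrary, smaller-than-default values for the arguments documented above.
configuration = MegatronBertConfig(
    hidden_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=2048,
    max_position_embeddings=1024,
)
model = MegatronBertModel(configuration)
print(sum(p.numel() for p in model.parameters()))  # parameter count of the random-weight model
```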
|
208_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertmodel
|
.md
|
The bare MegatronBert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
|
208_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertmodel
|
.md
|
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
|
208_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertmodel
|
.md
|
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
`add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward
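A minimal sketch of the decoder configuration just described, using a deliberately tiny random-weight model and dummy encoder states (all sizes are illustrative):
```python
import torch
from transformers import MegatronBertConfig, MegatronBertModel

# Tiny decoder-style MegatronBert with cross-attention enabled.
config = MegatronBertConfig(
    hidden_size=64, num_hidden_layers=2, num_attention_heads=4,
    intermediate_size=128, is_decoder=True, add_cross_attention=True,
)
model = MegatronBertModel(config)

input_ids = torch.randint(0, config.vocab_size, (1, 5))
encoder_hidden_states = torch.randn(1, 7, config.hidden_size)  # dummy encoder output
outputs = model(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 5, 64])
```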
|
208_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertformaskedlm
|
.md
|
MegatronBert Model with a `language modeling` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertformaskedlm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertformaskedlm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
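A short sketch of the `forward` method with `labels`, on a tiny randomly initialized model; real use would load a converted Megatron-BERT checkpoint instead:
```python
import torch
from transformers import MegatronBertConfig, MegatronBertForMaskedLM

config = MegatronBertConfig(
    hidden_size=64, num_hidden_layers=2, num_attention_heads=4, intermediate_size=128
)
model = MegatronBertForMaskedLM(config)

input_ids = torch.randint(0, config.vocab_size, (2, 8))
labels = input_ids.clone()  # predict every token back, just to exercise the loss
outputs = model(input_ids=input_ids, labels=labels)
print(outputs.loss, outputs.logits.shape)  # scalar loss, (2, 8, vocab_size)
```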
|
208_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforcausallm
|
.md
|
MegatronBert Model with a `language modeling` head on top for CLM fine-tuning.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
208_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertfornextsentenceprediction
|
.md
|
MegatronBert Model with a `next sentence prediction (classification)` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertfornextsentenceprediction
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertfornextsentenceprediction
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
208_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforpretraining
|
.md
|
MegatronBert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a
`next sentence prediction (classification)` head.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforpretraining
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforpretraining
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
208_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforsequenceclassification
|
.md
|
MegatronBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforsequenceclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
208_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertformultiplechoice
|
.md
|
MegatronBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output
and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertformultiplechoice
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
208_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertfortokenclassification
|
.md
|
MegatronBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
208_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertfortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
208_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertfortokenclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
208_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforquestionanswering
|
.md
|
MegatronBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
208_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
|
208_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron-bert.md
|
https://huggingface.co/docs/transformers/en/model_doc/megatron-bert/#megatronbertforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`MegatronBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
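A sketch of the span-classification `forward` on a tiny random-weight model, showing the `start_positions`/`end_positions` inputs and the start/end logits it returns:
```python
import torch
from transformers import MegatronBertConfig, MegatronBertForQuestionAnswering

config = MegatronBertConfig(
    hidden_size=64, num_hidden_layers=2, num_attention_heads=4, intermediate_size=128
)
model = MegatronBertForQuestionAnswering(config)

input_ids = torch.randint(0, config.vocab_size, (1, 12))
outputs = model(
    input_ids=input_ids,
    start_positions=torch.tensor([3]),
    end_positions=torch.tensor([5]),
)
print(outputs.loss, outputs.start_logits.shape, outputs.end_logits.shape)
```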
|
208_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-prophetnet/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
209_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-prophetnet/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
209_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-prophetnet/#xlm-prophetnet
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip>
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=xprophetnet">
|
209_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-prophetnet/#xlm-prophetnet
|
.md
|
</Tip>
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=xprophetnet">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-xprophetnet-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/xprophetnet-large-wiki100-cased-xglue-ntg">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
209_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-prophetnet/#xlm-prophetnet
|
.md
|
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
**DISCLAIMER:** If you see something strange, file a [GitHub Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) and assign
@patrickvonplaten.
|
209_1_2
|