source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9)
---|---|---|---|---
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
214_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
215_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
215_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#overview
|
.md
|
The MPNet model was proposed in [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
MPNet adopts a novel pre-training method, named masked and permuted language modeling, to inherit the advantages of
masked language modeling and permuted language modeling for natural language understanding.
The abstract from the paper is the following:
|
215_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#overview
|
.md
|
The abstract from the paper is the following:
*BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models.
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for
pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and
thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
|
215_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#overview
|
.md
|
thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the
dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position
information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in
|
215_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#overview
|
.md
|
information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in
XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of
down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large
margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g.,
BERT, XLNet, RoBERTa) under the same model setting.*
|
215_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#overview
|
.md
|
BERT, XLNet, RoBERTa) under the same model setting.*
The original code can be found [here](https://github.com/microsoft/MPNet).
|
215_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#usage-tips
|
.md
|
Since MPNet doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment; just
separate your segments with the separation token `tokenizer.sep_token` (or `[sep]`).
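For instance, a minimal sketch of encoding a sentence pair (assuming the `microsoft/mpnet-base` checkpoint):

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")

>>> # Pass the two segments as a pair; the tokenizer inserts the separators itself,
>>> # producing the `<s> A </s></s> B </s>` layout instead of relying on segment ids
>>> encoded = tokenizer("How are you?", "I am fine.")
```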
|
215_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
215_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
This is the configuration class to store the configuration of a [`MPNetModel`] or a [`TFMPNetModel`]. It is used to
instantiate an MPNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MPNet
[microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
215_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30527):
Vocabulary size of the MPNet model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`MPNetModel`] or [`TFMPNetModel`].
hidden_size (`int`, *optional*, defaults to 768):
|
215_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
`input_ids` passed when calling [`MPNetModel`] or [`TFMPNetModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
|
215_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
|
215_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
215_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
relative_attention_num_buckets (`int`, *optional*, defaults to 32):
|
215_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
The epsilon used by the layer normalization layers.
relative_attention_num_buckets (`int`, *optional*, defaults to 32):
The number of buckets to use for each attention layer.
Examples:
```python
>>> from transformers import MPNetModel, MPNetConfig
|
215_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetconfig
|
.md
|
>>> # Initializing a MPNet mpnet-base style configuration
>>> configuration = MPNetConfig()
>>> # Initializing a model from the mpnet-base style configuration
>>> model = MPNetModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
215_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
This tokenizer inherits from [`BertTokenizer`], which contains most of the methods. Users should refer to the
superclass for more information regarding methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
Whether or not to do basic tokenization before WordPiece.
never_split (`Iterable`, *optional*):
|
215_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
Whether or not to do basic tokenization before WordPiece.
never_split (`Iterable`, *optional*):
Collection of tokens which will never be split during tokenization. Only has an effect when
`do_basic_tokenize=True`
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
|
215_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
|
215_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
|
215_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
215_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
215_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
[issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original BERT).
|
215_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizer
|
.md
|
value for `lowercase` (as in the original BERT).
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
215_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizerfast
|
.md
|
Construct a "fast" MPNet tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizerFast`], which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
bos_token (`str`, *optional*, defaults to `"<s>"`):
|
215_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizerfast
|
.md
|
Whether or not to lowercase the input when tokenizing.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
|
215_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizerfast
|
.md
|
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
215_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
215_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizerfast
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
215_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizerfast
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
|
215_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnettokenizerfast
|
.md
|
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this
issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original BERT).
<frameworkcontent>
<pt>
|
215_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetmodel
|
.md
|
The bare MPNet Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
215_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MPNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
215_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
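A minimal forward-pass sketch (assuming the `microsoft/mpnet-base` checkpoint):

```python
>>> from transformers import AutoTokenizer, MPNetModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
>>> model = MPNetModel.from_pretrained("microsoft/mpnet-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # (batch_size, sequence_length, hidden_size)
>>> last_hidden_states = outputs.last_hidden_state
```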
|
215_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetformaskedlm
|
.md
|
No docstring available for MPNetForMaskedLM
Methods: forward
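In lieu of a docstring, a quick fill-mask sketch (the base checkpoint uses `<mask>` as its mask token):

```python
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="microsoft/mpnet-base")
>>> # Returns the top candidate fills for the masked position, with scores
>>> unmasker("Paris is the <mask> of France.")
```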
|
215_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetforsequenceclassification
|
.md
|
MPNet Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
215_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MPNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
215_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetforsequenceclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
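A minimal inference sketch (note that loading this head from the bare `microsoft/mpnet-base` checkpoint initializes it randomly, so the prediction is only illustrative until the model is fine-tuned):

```python
>>> from transformers import AutoTokenizer, MPNetForSequenceClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
>>> model = MPNetForSequenceClassification.from_pretrained("microsoft/mpnet-base", num_labels=2)

>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = int(logits.argmax(-1))
```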
|
215_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetformultiplechoice
|
.md
|
MPNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
215_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MPNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
215_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetformultiplechoice
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
215_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetfortokenclassification
|
.md
|
MPNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
215_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetfortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MPNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
215_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetfortokenclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
215_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetforquestionanswering
|
.md
|
MPNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
215_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MPNetConfig`]): Model configuration class with all the parameters of the model.
|
215_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#mpnetforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`MPNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
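A minimal span-extraction sketch (again, the QA head is randomly initialized when loaded from the bare base checkpoint, so the extracted span is only illustrative):

```python
>>> from transformers import AutoTokenizer, MPNetForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
>>> model = MPNetForQuestionAnswering.from_pretrained("microsoft/mpnet-base")

>>> question, context = "Who proposed MPNet?", "MPNet was proposed by researchers at Microsoft."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Pick the most likely start and end positions, then decode the span between them
>>> start = int(outputs.start_logits.argmax())
>>> end = int(outputs.end_logits.argmax())
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```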
</pt>
<tf>
|
215_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#tfmpnetmodel
|
.md
|
No docstring available for TFMPNetModel
Methods: call
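For the TensorFlow classes the pattern is the same, with `return_tensors="tf"` (a sketch):

```python
>>> from transformers import AutoTokenizer, TFMPNetModel

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
>>> model = TFMPNetModel.from_pretrained("microsoft/mpnet-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> last_hidden_states = outputs.last_hidden_state
```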
|
215_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#tfmpnetformaskedlm
|
.md
|
No docstring available for TFMPNetForMaskedLM
Methods: call
|
215_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#tfmpnetforsequenceclassification
|
.md
|
No docstring available for TFMPNetForSequenceClassification
Methods: call
|
215_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#tfmpnetformultiplechoice
|
.md
|
No docstring available for TFMPNetForMultipleChoice
Methods: call
|
215_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#tfmpnetfortokenclassification
|
.md
|
No docstring available for TFMPNetForTokenClassification
Methods: call
|
215_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/mpnet/#tfmpnetforquestionanswering
|
.md
|
No docstring available for TFMPNetForQuestionAnswering
Methods: call
</tf>
</frameworkcontent>
|
215_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
216_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
216_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#overview
|
.md
|
The ConvNeXT model was proposed in [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that is claimed to outperform them.
The abstract from the paper is the following:
|
216_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#overview
|
.md
|
The abstract from the paper is the following:
*The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers
|
216_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#overview
|
.md
|
(e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide
variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive
|
216_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#overview
|
.md
|
biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design
of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models
|
216_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#overview
|
.md
|
dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy
and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.jpg"
alt="drawing" width="600"/>
|
216_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#overview
|
.md
|
alt="drawing" width="600"/>
<small> ConvNeXT architecture. Taken from the <a href="https://arxiv.org/abs/2201.03545">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). TensorFlow version of the model was contributed by [ariG23498](https://github.com/ariG23498),
[gante](https://github.com/gante), and [sayakpaul](https://github.com/sayakpaul) (equal contribution). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt).
|
216_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.
<PipelineTag pipeline="image-classification"/>
- [`ConvNextForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
216_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#resources
|
.md
|
- See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
216_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
This is the configuration class to store the configuration of a [`ConvNextModel`]. It is used to instantiate a
ConvNeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the ConvNeXT
[facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
216_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
patch_size (`int`, *optional*, defaults to 4):
Patch size to use in the patch embedding layer.
num_stages (`int`, *optional*, defaults to 4):
The number of stages in the model.
hidden_sizes (`List[int]`, *optional*, defaults to `[96, 192, 384, 768]`):
|
216_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
The number of stages in the model.
hidden_sizes (`List[int]`, *optional*, defaults to `[96, 192, 384, 768]`):
Dimensionality (hidden size) at each stage.
depths (`List[int]`, *optional*, defaults to `[3, 3, 9, 3]`):
Depth (number of blocks) for each stage.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in each block. If string, `"gelu"`, `"relu"`,
`"selu"` and `"gelu_new"` are supported.
|
216_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
`"selu"` and `"gelu_new"` are supported.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
layer_scale_init_value (`float`, *optional*, defaults to 1e-6):
The initial value for the layer scale.
drop_path_rate (`float`, *optional*, defaults to 0.0):
The drop rate for stochastic depth.
|
216_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
drop_path_rate (`float`, *optional*, defaults to 0.0):
The drop rate for stochastic depth.
out_features (`List[str]`, *optional*):
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the
same order as defined in the `stage_names` attribute.
|
216_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
same order as defined in the `stage_names` attribute.
out_indices (`List[int]`, *optional*):
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
If unset and `out_features` is unset, will default to the last stage. Must be in the
same order as defined in the `stage_names` attribute.
Example:
```python
|
216_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
same order as defined in the `stage_names` attribute.
Example:
```python
>>> from transformers import ConvNextConfig, ConvNextModel
|
216_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextconfig
|
.md
|
>>> # Initializing a ConvNext convnext-tiny-224 style configuration
>>> configuration = ConvNextConfig()
>>> # Initializing a model (with random weights) from the convnext-tiny-224 style configuration
>>> model = ConvNextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
216_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextfeatureextractor
|
.md
|
No docstring available for ConvNextFeatureExtractor
|
216_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextimageprocessor
|
.md
|
Constructs a ConvNeXT image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden
by `do_resize` in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 384}`):
Resolution of the output image after `resize` is applied. If `size["shortest_edge"]` >= 384, the image is
|
216_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextimageprocessor
|
.md
|
Resolution of the output image after `resize` is applied. If `size["shortest_edge"]` >= 384, the image is
resized to `(size["shortest_edge"], size["shortest_edge"])`. Otherwise, the smaller edge of the image will
be matched to `int(size["shortest_edge"]/crop_pct)`, after which the image is cropped to
`(size["shortest_edge"], size["shortest_edge"])`. Only has an effect if `do_resize` is set to `True`. Can
be overridden by `size` in the `preprocess` method.
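A small sketch of this resize rule, assuming the default `crop_pct = 224 / 256` described below:

```python
# Resize rule above with size = {"shortest_edge": 224} and the default
# crop_pct = 224 / 256 (both values are assumptions for illustration)
shortest_edge, crop_pct = 224, 224 / 256
resize_to = int(shortest_edge / crop_pct)  # 256: the smaller edge is matched to this
crop_to = shortest_edge                    # 224: then center-cropped to 224 x 224
# With shortest_edge >= 384 the image is resized directly, with no crop
```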
|
216_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextimageprocessor
|
.md
|
be overridden by `size` in the `preprocess` method.
crop_pct (`float`, *optional*, defaults to `224 / 256`):
Percentage of the image to crop. Only has an effect if `do_resize` is `True` and `size < 384`. Can be
overridden by `crop_pct` in the `preprocess` method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
|
216_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextimageprocessor
|
.md
|
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
|
216_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextimageprocessor
|
.md
|
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
|
216_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextimageprocessor
|
.md
|
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
Methods: preprocess
<frameworkcontent>
<pt>
|
216_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextmodel
|
.md
|
The bare ConvNext model outputting raw features without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ConvNextConfig`]): Model configuration class with all the parameters of the model.
|
216_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextmodel
|
.md
|
behavior.
Parameters:
config ([`ConvNextConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
216_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextforimageclassification
|
.md
|
ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ConvNextConfig`]): Model configuration class with all the parameters of the model.
|
216_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#convnextforimageclassification
|
.md
|
behavior.
Parameters:
config ([`ConvNextConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
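A minimal classification sketch (assuming the `facebook/convnext-tiny-224` checkpoint; the COCO image URL is just a stand-in for any RGB image):

```python
>>> from transformers import AutoImageProcessor, ConvNextForImageClassification
>>> from PIL import Image
>>> import requests
>>> import torch

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
>>> model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Map the highest-scoring logit to its ImageNet label
>>> predicted_label = model.config.id2label[int(logits.argmax(-1))]
```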
</pt>
<tf>
|
216_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#tfconvnextmodel
|
.md
|
No docstring available for TFConvNextModel
Methods: call
|
216_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnext.md
|
https://huggingface.co/docs/transformers/en/model_doc/convnext/#tfconvnextforimageclassification
|
.md
|
No docstring available for TFConvNextForImageClassification
Methods: call
</tf>
</frameworkcontent>
|
216_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
217_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
217_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#overview
|
.md
|
The SegFormer model was proposed in [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping
Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great
results on image segmentation benchmarks such as ADE20K and Cityscapes.
The abstract from the paper is the following:
|
217_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#overview
|
.md
|
results on image segmentation benchmarks such as ADE20K and Cityscapes.
The abstract from the paper is the following:
*We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with
lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel
hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding,
|
217_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#overview
|
.md
|
hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding,
thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution
differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from
different layers, and thus combining both local attention and global attention to render powerful representations. We
|
217_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#overview
|
.md
|
different layers, and thus combining both local attention and global attention to render powerful representations. We
show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our
approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance
and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters,
|
217_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#overview
|
.md
|
and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters,
being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on
Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.*
The figure below illustrates the architecture of SegFormer. Taken from the [original paper](https://arxiv.org/abs/2105.15203).
|
217_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#overview
|
.md
|
<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/segformer_architecture.png"/>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version
of the model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code can be found [here](https://github.com/NVlabs/SegFormer).
|
217_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
|
.md
|
- SegFormer consists of a hierarchical Transformer encoder, and a lightweight all-MLP decoder head.
[`SegformerModel`] is the hierarchical Transformer encoder (which in the paper is also referred to
as Mix Transformer or MiT). [`SegformerForSemanticSegmentation`] adds the all-MLP decoder head on
top to perform semantic segmentation of images. In addition, there's
[`SegformerForImageClassification`] which can be used to - you guessed it - classify images. The
|
217_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
|
.md
|
[`SegformerForImageClassification`] which can be used to - you guessed it - classify images. The
authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they threw
away the classification head and replaced it with the all-MLP decode head. Finally, they fine-tuned the model end-to-end on
ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be
|
217_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
|
.md
|
ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be
found on the [hub](https://huggingface.co/models?other=segformer).
- The quickest way to get started with SegFormer is by checking the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer) (which showcase both inference and
|
217_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
|
.md
|
fine-tuning on custom data). One can also check out the [blog post](https://huggingface.co/blog/fine-tune-segformer) introducing SegFormer and illustrating how it can be fine-tuned on custom data.
- TensorFlow users should refer to [this repository](https://github.com/deep-diver/segformer-tf-transformers) that shows off-the-shelf inference and fine-tuning.
- One can also check out [this interactive demo on Hugging Face Spaces](https://huggingface.co/spaces/chansung/segformer-tf-transformers)
|
217_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
|
.md
|
to try out a SegFormer model on custom images.
- SegFormer works on any input size, as it pads the input to be divisible by `config.patch_sizes`.
- One can use [`SegformerImageProcessor`] to prepare images and corresponding segmentation maps
for the model. Note that this image processor is fairly basic and does not include all data augmentations used in
|
217_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
|
.md
|
for the model. Note that this image processor is fairly basic and does not include all data augmentations used in
the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found [here](https://github.com/NVlabs/SegFormer/blob/master/local_configs/_base_/datasets/ade20k_repeat.py). The most
important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size,
such as 512x512 or 640x640, after which they are normalized.
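Putting these tips together, a minimal semantic-segmentation sketch (the `nvidia/segformer-b0-finetuned-ade-512-512` checkpoint and the COCO image URL are assumptions for illustration):

```python
>>> from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
>>> from PIL import Image
>>> import requests
>>> import torch

>>> processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
>>> model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Logits come out at 1/4 of the input resolution; upsample back to the image size
>>> segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```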
|
217_2_5
|