source (stringclasses, 470 values) | url (stringlengths 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths 1–512) | chunk_id (stringlengths 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#overview
|
.md
|
The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional
Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy,
Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA
achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including
|
224_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#overview
|
.md
|
achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including
named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti)
and natural language inference (XNLI).*
This model was contributed by [sijunhe](https://huggingface.co/sijunhe). The original code can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch).
|
224_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
224_3_0
|
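To complement the task guides listed above, here is a minimal, hedged sketch of running a Nezha sequence-classification head; the checkpoint name, label count, and example sentence are illustrative assumptions, and a freshly added head like this still needs fine-tuning before its predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, NezhaForSequenceClassification

# hypothetical checkpoint choice; replace with your own fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForSequenceClassification.from_pretrained("sijunhe/nezha-cn-base", num_labels=2)

inputs = tokenizer("这部电影很好看", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()
```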
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
This is the configuration class to store the configuration of a [`NezhaModel`]. It is used to instantiate a Nezha
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Nezha
[sijunhe/nezha-cn-base](https://huggingface.co/sijunhe/nezha-cn-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
224_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, optional, defaults to 21128):
Vocabulary size of the NEZHA model. Defines the number of different tokens that can be represented by the
*inputs_ids* passed to the forward method of [`NezhaModel`].
hidden_size (`int`, optional, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
|
224_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
hidden_size (`int`, optional, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, optional, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, optional, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, optional, defaults to 3072):
The dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
224_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
The dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, optional, defaults to "gelu"):
The non-linear activation function (function or string) in the encoder and pooler.
hidden_dropout_prob (`float`, optional, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, optional, defaults to 0.1):
|
224_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
attention_probs_dropout_prob (`float`, optional, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, optional, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
(e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, optional, defaults to 2):
The vocabulary size of the *token_type_ids* passed into [`NezhaModel`].
initializer_range (`float`, optional, defaults to 0.02):
|
224_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
initializer_range (`float`, optional, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, optional, defaults to 1e-12):
The epsilon used by the layer normalization layers.
classifier_dropout (`float`, optional, defaults to 0.1):
The dropout ratio for attached classifiers.
is_decoder (`bool`, *optional*, defaults to `False`):
Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
|
224_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
Example:
```python
>>> from transformers import NezhaConfig, NezhaModel
|
224_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaconfig
|
.md
|
>>> # Initializing a Nezha configuration
>>> configuration = NezhaConfig()
>>> # Initializing a model (with random weights) from the Nezha-base style configuration
>>> model = NezhaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
224_4_7
|
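Building on the example above, a short sketch of overriding some of the documented configuration arguments; the values below are illustrative, not recommended settings.

```python
>>> from transformers import NezhaConfig, NezhaModel

>>> # A smaller, hypothetical configuration (argument values are illustrative)
>>> configuration = NezhaConfig(hidden_size=384, num_hidden_layers=6, num_attention_heads=6, intermediate_size=1536)
>>> model = NezhaModel(configuration)
```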
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhamodel
|
.md
|
The bare Nezha Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
224_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
224_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhamodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
|
224_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhamodel
|
.md
|
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
|
224_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhamodel
|
.md
|
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
`add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward
|
224_5_4
|
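A minimal sketch of the decoder setup described above, assuming you want cross-attention layers added; the resulting model has randomly initialized cross-attention weights, and `encoder_hidden_states` must then be supplied in the forward pass.

```python
>>> from transformers import NezhaConfig, NezhaModel

>>> # Sketch: decoder-style Nezha with cross-attention enabled
>>> config = NezhaConfig(is_decoder=True, add_cross_attention=True)
>>> decoder = NezhaModel(config)
>>> # decoder(input_ids=..., encoder_hidden_states=...) is then expected at call time
```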
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforpretraining
|
.md
|
Nezha Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next
sentence prediction (classification)` head.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
224_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforpretraining
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
224_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforpretraining
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
224_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaformaskedlm
|
.md
|
Nezha Model with a `language modeling` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
|
224_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaformaskedlm
|
.md
|
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
224_7_1
|
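A minimal masked-language-modeling sketch for the head described above; the checkpoint and the sentence are assumptions used purely for illustration.

```python
import torch
from transformers import AutoTokenizer, NezhaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # hypothetical checkpoint choice
model = NezhaForMaskedLM.from_pretrained("sijunhe/nezha-cn-base")

inputs = tokenizer("巴黎是法国的[MASK]都。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# take the highest-scoring token at the [MASK] position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```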
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhafornextsentenceprediction
|
.md
|
Nezha Model with a `next sentence prediction (classification)` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
224_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhafornextsentenceprediction
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
224_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhafornextsentenceprediction
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
224_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforsequenceclassification
|
.md
|
Nezha Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
224_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
224_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforsequenceclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
224_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaformultiplechoice
|
.md
|
Nezha Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
224_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
224_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaformultiplechoice
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
224_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhafortokenclassification
|
.md
|
Nezha Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
224_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhafortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
224_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhafortokenclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
224_11_2
|
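A short, hedged sketch of token classification (e.g. NER) with the head above; the checkpoint and the number of labels are placeholders, and a newly attached head would need fine-tuning before its predictions mean anything.

```python
import torch
from transformers import AutoTokenizer, NezhaForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # placeholder checkpoint
model = NezhaForTokenClassification.from_pretrained("sijunhe/nezha-cn-base", num_labels=5)

inputs = tokenizer("华为的总部位于深圳。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, sequence_length, num_labels)
predicted_label_ids = logits.argmax(dim=-1)
```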
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforquestionanswering
|
.md
|
Nezha Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
|
224_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
|
224_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezhaforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`NezhaConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
224_12_2
|
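A minimal extractive question-answering sketch for the span head described above; the checkpoint, question, and context are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, NezhaForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # placeholder checkpoint
model = NezhaForQuestionAnswering.from_pretrained("sijunhe/nezha-cn-base")

question, context = "华为的总部在哪里?", "华为技术有限公司的总部位于深圳。"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# the span between the argmax of the start and end logits is the predicted answer
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])
```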
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
225_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
225_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#overview
|
.md
|
The Audio Spectrogram Transformer model was proposed in [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
The Audio Spectrogram Transformer applies a [Vision Transformer](vit) to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results
for audio classification.
The abstract from the paper is the following:
|
225_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#overview
|
.md
|
*In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention
|
225_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#overview
|
.md
|
model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6%
|
225_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#overview
|
.md
|
AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.*
|
225_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/audio_spectogram_transformer_architecture.png"
alt="drawing" width="600"/>
<small> Audio Spectrogram Transformer architecture. Taken from the <a href="https://arxiv.org/abs/2104.01778">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/YuanGongND/ast).
|
225_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#usage-tips
|
.md
|
- When fine-tuning the Audio Spectrogram Transformer (AST) on your own dataset, it's recommended to take care of the input normalization (to make
sure the input has a mean of 0 and a std of 0.5). [`ASTFeatureExtractor`] takes care of this. Note that it uses the AudioSet
mean and std by default. You can check [`ast/src/get_norm_stats.py`](https://github.com/YuanGongND/ast/blob/master/src/get_norm_stats.py) to see how
the authors compute the stats for a downstream dataset.
|
225_2_0
|
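A short sketch of the normalization tip above, assuming you have already computed dataset-level statistics yourself; the numeric values below are placeholders, not real statistics.

```python
from transformers import ASTFeatureExtractor

feature_extractor = ASTFeatureExtractor.from_pretrained(
    "MIT/ast-finetuned-audioset-10-10-0.4593",
    mean=-1.23,  # placeholder: mean of the log-Mel features on your dataset
    std=4.56,    # placeholder: standard deviation on your dataset
)
```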
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#usage-tips
|
.md
|
the authors compute the stats for a downstream dataset.
- Note that the AST needs a low learning rate (the authors use a 10 times smaller learning rate compared to their CNN model proposed in the
[PSLA paper](https://arxiv.org/abs/2102.01243)) and converges quickly, so please search for a suitable learning rate and learning rate scheduler for your task.
|
225_2_1
|
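As a rough illustration of the learning-rate advice above, a hedged [`TrainingArguments`] sketch; the exact value, scheduler, and epoch count should come from your own hyperparameter search rather than this example.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ast-finetuned",  # placeholder output directory
    learning_rate=1e-5,          # roughly an order of magnitude lower than a typical CNN fine-tuning rate
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```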
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
225_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import ASTForAudioClassification

model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
|
225_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `MIT/ast-finetuned-audioset-10-10-0.4593` model, we saw the following speedups during inference.
| Batch size | Average inference time (ms), eager mode | Average inference time (ms), SDPA mode | Speedup, SDPA / eager (x) |
|
225_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#using-scaled-dot-product-attention-sdpa
|
.md
|
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 27 | 6 | 4.5 |
| 2 | 12 | 6 | 2 |
|
225_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#using-scaled-dot-product-attention-sdpa
|
.md
|
| 4 | 21 | 8 | 2.62 |
| 8 | 40 | 14 | 2.86 |
|
225_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer.
<PipelineTag pipeline="audio-classification"/>
- A notebook illustrating inference with AST for audio classification can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/AST).
|
225_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#resources
|
.md
|
- [`ASTForAudioClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).
- See also: [Audio classification](../tasks/audio_classification).
|
225_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#resources
|
.md
|
- See also: [Audio classification](../tasks/audio_classification).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
225_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
This is the configuration class to store the configuration of a [`ASTModel`]. It is used to instantiate an AST
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the AST
[MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
architecture.
|
225_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
[MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
|
225_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
225_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
225_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
|
225_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
frequency_stride (`int`, *optional*, defaults to 10):
Frequency stride to use when patchifying the spectrograms.
time_stride (`int`, *optional*, defaults to 10):
|
225_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
Frequency stride to use when patchifying the spectrograms.
time_stride (`int`, *optional*, defaults to 10):
Temporal stride to use when patchifying the spectrograms.
max_length (`int`, *optional*, defaults to 1024):
Temporal dimension of the spectrograms.
num_mel_bins (`int`, *optional*, defaults to 128):
Frequency dimension of the spectrograms (number of Mel-frequency bins).
Example:
```python
>>> from transformers import ASTConfig, ASTModel
|
225_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astconfig
|
.md
|
>>> # Initializing an AST MIT/ast-finetuned-audioset-10-10-0.4593 style configuration
>>> configuration = ASTConfig()
>>> # Initializing a model (with random weights) from the MIT/ast-finetuned-audioset-10-10-0.4593 style configuration
>>> model = ASTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
225_5_7
|
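Building on the example above, a sketch of overriding the spectrogram-related arguments; the values are illustrative and should be kept in sync with what your feature extractor actually produces.

```python
>>> from transformers import ASTConfig, ASTModel

>>> # Sketch: a configuration for shorter inputs (values are illustrative)
>>> configuration = ASTConfig(max_length=512, num_mel_bins=128)
>>> model = ASTModel(configuration)
```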
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astfeatureextractor
|
.md
|
Constructs an Audio Spectrogram Transformer (AST) feature extractor.
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
This class extracts mel-filter bank features from raw speech using TorchAudio if installed or using numpy
otherwise, pads/truncates them to a fixed length and normalizes them using a mean and standard deviation.
|
225_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astfeatureextractor
|
.md
|
otherwise, pads/truncates them to a fixed length and normalizes them using a mean and standard deviation.
Args:
feature_size (`int`, *optional*, defaults to 1):
The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
num_mel_bins (`int`, *optional*, defaults to 128):
Number of Mel-frequency bins.
max_length (`int`, *optional*, defaults to 1024):
|
225_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astfeatureextractor
|
.md
|
Number of Mel-frequency bins.
max_length (`int`, *optional*, defaults to 1024):
Maximum length to which to pad/truncate the extracted features.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the log-Mel features using `mean` and `std`.
mean (`float`, *optional*, defaults to -4.2677393):
The mean value used to normalize the log-Mel features. Uses the AudioSet mean by default.
std (`float`, *optional*, defaults to 4.5689974):
|
225_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astfeatureextractor
|
.md
|
std (`float`, *optional*, defaults to 4.5689974):
The standard deviation value used to normalize the log-Mel features. Uses the AudioSet standard deviation
by default.
return_attention_mask (`bool`, *optional*, defaults to `False`):
Whether or not [`~ASTFeatureExtractor.__call__`] should return `attention_mask`.
Methods: __call__
|
225_6_3
|
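A minimal sketch of calling the feature extractor described above on raw audio; the silent waveform is only a stand-in for real 16 kHz audio.

```python
import numpy as np
from transformers import ASTFeatureExtractor

feature_extractor = ASTFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")

waveform = np.zeros(16000, dtype=np.float32)  # one second of silence as a stand-in
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
print(inputs["input_values"].shape)  # (1, max_length, num_mel_bins), e.g. (1, 1024, 128)
```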
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astmodel
|
.md
|
The bare AST Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ASTConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
|
225_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astmodel
|
.md
|
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
225_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astforaudioclassification
|
.md
|
Audio Spectrogram Transformer model with an audio classification head on top (a linear layer on top of the pooled
output) e.g. for datasets like AudioSet, Speech Commands v2.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ASTConfig`]):
|
225_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/audio-spectrogram-transformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/audio-spectrogram-transformer/#astforaudioclassification
|
.md
|
behavior.
Parameters:
config ([`ASTConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
225_8_1
|
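A minimal inference sketch for the classification head above, reusing the AudioSet checkpoint; replace the placeholder waveform with real 16 kHz audio.

```python
import numpy as np
import torch
from transformers import ASTFeatureExtractor, ASTForAudioClassification

feature_extractor = ASTFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")

waveform = np.zeros(16000, dtype=np.float32)  # placeholder waveform
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_label = model.config.id2label[logits.argmax(-1).item()]
```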
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
226_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
226_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#overview
|
.md
|
The Mask2Former model was proposed in [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. Mask2Former is a unified framework for panoptic, instance and semantic segmentation and features significant performance and efficiency improvements over [MaskFormer](maskformer).
The abstract from the paper is the following:
|
226_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#overview
|
.md
|
The abstract from the paper is the following:
*Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice
|
226_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#overview
|
.md
|
of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at
|
226_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#overview
|
.md
|
features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).*
|
226_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/mask2former_architecture.jpg" alt="drawing" width="600"/>
<small> Mask2Former architecture. Taken from the <a href="https://arxiv.org/abs/2112.01527">original paper.</a> </small>
This model was contributed by [Shivalika Singh](https://huggingface.co/shivi) and [Alara Dirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/facebookresearch/Mask2Former).
|
226_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#usage-tips
|
.md
|
- Mask2Former uses the same preprocessing and postprocessing steps as [MaskFormer](maskformer). Use [`Mask2FormerImageProcessor`] or [`AutoImageProcessor`] to prepare images and optional targets for the model.
|
226_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#usage-tips
|
.md
|
- To get the final segmentation, depending on the task, you can call [`~Mask2FormerImageProcessor.post_process_semantic_segmentation`] or [`~Mask2FormerImageProcessor.post_process_instance_segmentation`] or [`~Mask2FormerImageProcessor.post_process_panoptic_segmentation`]. All three tasks can be solved using [`Mask2FormerForUniversalSegmentation`] output; panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object/s (e.g. sky) together.
|
226_2_1
|
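A hedged end-to-end sketch of the preprocessing and post-processing flow described above, assuming the COCO instance-segmentation checkpoint and a sample COCO image URL.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-instance")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample image, assumed reachable
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# map the predictions back to the original (height, width) of the image
results = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
segmentation_map = results["segmentation"]  # per-pixel instance ids
```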
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mask2Former.
- Demo notebooks regarding inference + fine-tuning Mask2Former on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Mask2Former).
|
226_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#resources
|
.md
|
- Scripts for finetuning [`Mask2Former`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/instance-segmentation).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
226_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
This is the configuration class to store the configuration of a [`Mask2FormerModel`]. It is used to instantiate a
Mask2Former model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Mask2Former
[facebook/mask2former-swin-small-coco-instance](https://huggingface.co/facebook/mask2former-swin-small-coco-instance)
architecture.
|
226_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Currently, Mask2Former only supports the [Swin Transformer](swin) as backbone.
Args:
backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `SwinConfig()`):
The configuration of the backbone model. If unset, the configuration corresponding to
`swin-base-patch4-window12-384` will be used.
|
226_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
`swin-base-patch4-window12-384` will be used.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
|
226_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
|
226_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
feature_size (`int`, *optional*, defaults to 256):
The features (channels) of the resulting feature maps.
mask_feature_size (`int`, *optional*, defaults to 256):
The masks' feature size; this value will also be used to specify the Feature Pyramid Network features'
size.
hidden_dim (`int`, *optional*, defaults to 256):
Dimensionality of the encoder layers.
encoder_feedforward_dim (`int`, *optional*, defaults to 1024):
|
226_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
Dimensionality of the encoder layers.
encoder_feedforward_dim (`int`, *optional*, defaults to 1024):
Dimension of the feedforward network for the deformable DETR encoder used as part of the pixel decoder.
encoder_layers (`int`, *optional*, defaults to 6):
Number of layers in the deformable DETR encoder used as part of the pixel decoder.
decoder_layers (`int`, *optional*, defaults to 10):
Number of layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 8):
|
226_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
Number of layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings and encoder.
dim_feedforward (`int`, *optional*, defaults to 2048):
Feature dimension in feedforward network for transformer decoder.
pre_norm (`bool`, *optional*, defaults to `False`):
|
226_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
Feature dimension in feedforward network for transformer decoder.
pre_norm (`bool`, *optional*, defaults to `False`):
Whether to use pre-LayerNorm or not for transformer decoder.
enforce_input_projection (`bool`, *optional*, defaults to `False`):
Whether to add an input projection 1x1 convolution even if the input channels and hidden dim are identical
in the Transformer decoder.
common_stride (`int`, *optional*, defaults to 4):
|
226_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
in the Transformer decoder.
common_stride (`int`, *optional*, defaults to 4):
Parameter used for determining the number of FPN levels used as part of the pixel decoder.
ignore_value (`int`, *optional*, defaults to 255):
Category id to be ignored during training.
num_queries (`int`, *optional*, defaults to 100):
Number of queries for the decoder.
no_object_weight (`float`, *optional*, defaults to 0.1):
The weight to apply to the null (no object) class.
class_weight (`float`, *optional*, defaults to 2.0):
|
226_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
The weight to apply to the null (no object) class.
class_weight (`float`, *optional*, defaults to 2.0):
The weight for the cross entropy loss.
mask_weight (`float`, *optional*, defaults to 5.0):
The weight for the mask loss.
dice_weight (`float`, *optional*, defaults to 5.0):
The weight for the dice loss.
train_num_points (`int`, *optional*, defaults to 12544):
Number of points used for sampling during loss calculation.
oversample_ratio (`float`, *optional*, defaults to 3.0):
|
226_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
Number of points used for sampling during loss calculation.
oversample_ratio (`float`, *optional*, defaults to 3.0):
Oversampling parameter used for calculating the number of sampled points.
importance_sample_ratio (`float`, *optional*, defaults to 0.75):
Ratio of points that are sampled via importance sampling.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (`float`, *optional*, defaults to 1.0):
|
226_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
init_xavier_std (`float`, *optional*, defaults to 1.0):
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
use_auxiliary_loss (`bool`, *optional*, defaults to `True`):
If `True`, [`Mask2FormerForUniversalSegmentationOutput`] will contain the auxiliary losses computed using
the logits from each of the decoder's stages.
feature_strides (`List[int]`, *optional*, defaults to `[4, 8, 16, 32]`):
Feature strides corresponding to features generated from backbone network.
|
226_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
Feature strides corresponding to features generated from backbone network.
output_auxiliary_logits (`bool`, *optional*):
Should the model output its `auxiliary_logits` or not.
Examples:
```python
>>> from transformers import Mask2FormerConfig, Mask2FormerModel
|
226_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerconfig
|
.md
|
>>> # Initializing a Mask2Former facebook/mask2former-swin-small-coco-instance configuration
>>> configuration = Mask2FormerConfig()
>>> # Initializing a model (with random weights) from the facebook/mask2former-swin-small-coco-instance style configuration
>>> model = Mask2FormerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
226_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
models.mask2former.modeling_mask2former.Mask2FormerModelOutput
Class for outputs of [`Mask2FormerModel`]. This class returns all the needed hidden states to compute the logits.
Args:
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`, *optional*):
Last hidden states (final feature map) of the last stage of the encoder model (backbone). Returned when
`output_hidden_states=True` is passed.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*):
|
226_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
`output_hidden_states=True` is passed.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder
model at the output of each stage. Returned when `output_hidden_states=True` is passed.
|
226_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
model at the output of each stage. Returned when `output_hidden_states=True` is passed.
pixel_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`, *optional*):
Last hidden states (final feature map) of the last stage of the pixel decoder model.
pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
226_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage. Returned when `output_hidden_states=True` is passed.
transformer_decoder_last_hidden_state (`tuple(torch.FloatTensor)`):
Final output of the transformer decoder `(batch_size, sequence_length, hidden_size)`.
|
226_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Final output of the transformer decoder `(batch_size, sequence_length, hidden_size)`.
transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage. Returned when `output_hidden_states=True` is passed.
|
226_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
transformer decoder at the output of each stage. Returned when `output_hidden_states=True` is passed.
transformer_decoder_intermediate_states (`tuple(torch.FloatTensor)` of shape `(num_queries, 1, hidden_size)`):
Intermediate decoder activations, i.e. the output of each decoder layer, each of which has gone through a
layernorm.
masks_queries_logits (`tuple(torch.FloatTensor)` of shape `(batch_size, num_queries, height, width)`):
Mask Predictions from each layer in the transformer decoder.
|
226_5_5
|