source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across
|
286_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.*
|
286_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
The figure below illustrates the architecture of OneFormer. Taken from the [original paper](https://arxiv.org/abs/2211.06220).
<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_architecture.png"/>
This model was contributed by [Jitesh Jain](https://huggingface.co/praeclarumjj3). The original code can be found [here](https://github.com/SHI-Labs/OneFormer).
|
286_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#usage-tips
|
.md
|
- OneFormer requires two inputs during inference: *image* and *task token*.
- During training, OneFormer only uses panoptic annotations.
- If you want to train the model in a distributed environment across multiple nodes, you should update the
`get_num_masks` function inside the `OneFormerLoss` class of `modeling_oneformer.py`. When training on multiple nodes, this should be
|
286_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#usage-tips
|
.md
|
set to the average number of target masks across all nodes, as can be seen in the original implementation [here](https://github.com/SHI-Labs/OneFormer/blob/33ebb56ed34f970a30ae103e786c0cb64c653d9a/oneformer/modeling/criterion.py#L287). See the sketch below.
|
286_2_1
|
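Below is a minimal sketch of what such a modification could look like, assuming a `torch.distributed` process group is already initialized. The function name and signature follow the `OneFormerLoss.get_num_masks` method mentioned above, but the exact code in `modeling_oneformer.py` may differ.
```python
import torch
import torch.distributed as dist


def get_num_masks(self, class_labels, device: torch.device) -> torch.Tensor:
    """Average the number of target masks across all processes so every node normalizes the loss identically."""
    num_masks = sum(len(classes) for classes in class_labels)
    num_masks = torch.as_tensor([num_masks], dtype=torch.float, device=device)
    if dist.is_available() and dist.is_initialized():
        # Sum the per-process counts, then divide by the world size to get the average.
        dist.all_reduce(num_masks)
        num_masks = num_masks / dist.get_world_size()
    return torch.clamp(num_masks, min=1)
```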
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#usage-tips
|
.md
|
- One can use [`OneFormerProcessor`] to prepare input images, task inputs, and optional targets for the model. [`OneFormerProcessor`] wraps [`OneFormerImageProcessor`] and [`CLIPTokenizer`] into a single instance that both prepares the images and encodes the task inputs. See the example below.
|
286_2_2
|
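A minimal sketch of preparing inputs with [`OneFormerProcessor`], using the `shi-labs/oneformer_ade20k_swin_tiny` checkpoint mentioned later in this document; the COCO image URL is only an illustrative sample.
```python
import requests
from PIL import Image
from transformers import OneFormerProcessor

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One call resizes/normalizes the image and tokenizes the task input.
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
print(list(inputs.keys()))  # e.g. ['pixel_values', 'pixel_mask', 'task_inputs']
```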
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#usage-tips
|
.md
|
- To get the final segmentation, depending on the task, you can call [`~OneFormerProcessor.post_process_semantic_segmentation`], [`~OneFormerImageProcessor.post_process_instance_segmentation`] or [`~OneFormerImageProcessor.post_process_panoptic_segmentation`]. All three tasks can be solved using [`OneFormerForUniversalSegmentation`] output; panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object(s) (e.g. sky) together. See the example below.
|
286_2_3
|
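A hedged sketch of panoptic inference and post-processing with the same checkpoint; the image URL is illustrative, and `label_ids_to_fuse` is shown only as a comment since its values depend on the dataset's label ids.
```python
import torch
import requests
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# target_sizes expects (height, width) per image; PIL's `size` is (width, height).
# Pass label_ids_to_fuse={...} to additionally fuse all instances of given classes (e.g. sky).
result = processor.image_processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
segmentation = result["segmentation"]    # (height, width) map of segment ids
segments_info = result["segments_info"]  # per-segment label ids and scores
```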
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OneFormer.
- Demo notebooks regarding inference + fine-tuning on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/OneFormer).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
286_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
models.oneformer.modeling_oneformer.OneFormerModelOutput
Class for outputs of [`OneFormerModel`]. This class returns all the needed hidden states to compute the logits.
Args:
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
|
286_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
286_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
286_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
transformer_decoder_object_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`)
Output object queries from the last layer in the transformer decoder.
|
286_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
Output object queries from the last layer in the transformer decoder.
transformer_decoder_contrastive_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`)
Contrastive queries from the transformer decoder.
transformer_decoder_mask_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, height, width)`)
Mask Predictions from the last layer in the transformer decoder.
|
286_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
Mask Predictions from the last layer in the transformer decoder.
transformer_decoder_class_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, num_classes+1)`):
Class Predictions from the last layer in the transformer decoder.
transformer_decoder_auxiliary_predictions (Tuple of Dict of `str, torch.FloatTensor`, *optional*):
Tuple of class and mask predictions from each layer of the transformer decoder.
|
286_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
Tuple of class and mask predictions from each layer of the transformer decoder.
text_queries (`torch.FloatTensor`, *optional*, of shape `(batch_size, num_queries, hidden_dim)`)
Text queries derived from the input text list used for calculating contrastive loss during training.
task_token (`torch.FloatTensor` of shape `(batch_size, hidden_dim)`)
1D task token to condition the queries.
|
286_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
task_token (`torch.FloatTensor` of shape `(batch_size, hidden_dim)`)
1D task token to condition the queries.
attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `tuple(torch.FloatTensor)` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`. Self- and cross-attention weights from the transformer decoder.
|
286_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
sequence_length)`. Self- and cross-attention weights from the transformer decoder.
models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput
Class for outputs of [`OneFormerForUniversalSegmentation`].
This output can be directly passed to [`~OneFormerImageProcessor.post_process_semantic_segmentation`] or
[`~OneFormerImageProcessor.post_process_instance_segmentation`] or
[`~OneFormerImageProcessor.post_process_panoptic_segmentation`] depending on the task. Please see
|
286_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
[`~OneFormerImageProcessor.post_process_panoptic_segmentation`] depending on the task. Please see
[`~OneFormerImageProcessor`] for details regarding usage.
Args:
loss (`torch.Tensor`, *optional*):
The computed loss, returned when labels are present.
class_queries_logits (`torch.FloatTensor`):
A tensor of shape `(batch_size, num_queries, num_labels + 1)` representing the proposed classes for each
query. Note the `+ 1` is needed because we incorporate the null class.
|
286_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
query. Note the `+ 1` is needed because we incorporate the null class.
masks_queries_logits (`torch.FloatTensor`):
A tensor of shape `(batch_size, num_queries, height, width)` representing the proposed masks for each
query.
auxiliary_predictions (List of Dict of `str, torch.FloatTensor`, *optional*):
List of class and mask predictions from each layer of the transformer decoder.
|
286_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
List of class and mask predictions from each layer of the transformer decoder.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
|
286_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
model at the output of each stage.
pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
|
286_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
decoder model at the output of each stage.
transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
|
286_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
transformer decoder at the output of each stage.
transformer_decoder_object_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`)
Output object queries from the last layer in the transformer decoder.
transformer_decoder_contrastive_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`)
Contrastive queries from the transformer decoder.
transformer_decoder_mask_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, height, width)`)
|
286_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
transformer_decoder_mask_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, height, width)`)
Mask Predictions from the last layer in the transformer decoder.
transformer_decoder_class_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, num_classes+1)`):
Class Predictions from the last layer in the transformer decoder.
transformer_decoder_auxiliary_predictions (List of Dict of `str, torch.FloatTensor`, *optional*):
|
286_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
transformer_decoder_auxiliary_predictions (List of Dict of `str, torch.FloatTensor`, *optional*):
List of class and mask predictions from each layer of the transformer decoder.
text_queries (`torch.FloatTensor`, *optional*, of shape `(batch_size, num_queries, hidden_dim)`)
Text queries derived from the input text list used for calculating contrastive loss during training.
task_token (`torch.FloatTensor` of shape `(batch_size, hidden_dim)`)
1D task token to condition the queries.
|
286_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
|
.md
|
task_token (`torch.FloatTensor` of shape `(batch_size, hidden_dim)`)
1D task token to condition the queries.
attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `tuple(torch.FloatTensor)` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`. Self- and cross-attention weights from the transformer decoder.
|
286_4_17
|
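As a hedged sketch of how the output fields documented above can be inspected after a forward pass (checkpoint and image URL are illustrative; shapes follow the descriptions above):
```python
import torch
import requests
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

print(outputs.class_queries_logits.shape)  # (batch_size, num_queries, num_labels + 1); the extra slot is the null class
print(outputs.masks_queries_logits.shape)  # (batch_size, num_queries, height, width); one mask proposal per query
print(len(outputs.encoder_hidden_states))  # one feature map per backbone stage (plus the embeddings output)
```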
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
This is the configuration class to store the configuration of a [`OneFormerModel`]. It is used to instantiate a
OneFormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the OneFormer
[shi-labs/oneformer_ade20k_swin_tiny](https://huggingface.co/shi-labs/oneformer_ade20k_swin_tiny) architecture
trained on [ADE20k-150](https://huggingface.co/datasets/scene_parse_150).
|
286_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
trained on [ADE20k-150](https://huggingface.co/datasets/scene_parse_150).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
backbone_config (`PretrainedConfig`, *optional*, defaults to `SwinConfig`):
The configuration of the backbone model.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
|
286_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
|
286_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
ignore_value (`int`, *optional*, defaults to 255):
|
286_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
ignore_value (`int`, *optional*, defaults to 255):
Value to be ignored in the ground-truth label when computing the loss.
num_queries (`int`, *optional*, defaults to 150):
Number of object queries.
no_object_weight (`float`, *optional*, defaults to 0.1):
Weight for no-object class predictions.
class_weight (`float`, *optional*, defaults to 2.0):
Weight for Classification CE loss.
mask_weight (`float`, *optional*, defaults to 5.0):
Weight for binary CE loss.
dice_weight (`float`, *optional*, defaults to 5.0):
|
286_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Weight for binary CE loss.
dice_weight (`float`, *optional*, defaults to 5.0):
Weight for dice loss.
contrastive_weight (`float`, *optional*, defaults to 0.5):
Weight for contrastive loss.
contrastive_temperature (`float`, *optional*, defaults to 0.07):
Initial value for scaling the contrastive logits.
train_num_points (`int`, *optional*, defaults to 12544):
Number of points to sample while calculating losses on mask predictions.
oversample_ratio (`float`, *optional*, defaults to 3.0):
|
286_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
oversample_ratio (`float`, *optional*, defaults to 3.0):
Ratio to decide how many points to oversample.
importance_sample_ratio (`float`, *optional*, defaults to 0.75):
Ratio of points that are sampled via importance sampling.
init_std (`float`, *optional*, defaults to 0.02):
Standard deviation for normal initialization.
init_xavier_std (`float`, *optional*, defaults to 1.0):
Standard deviation for xavier uniform initialization.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
|
286_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Standard deviation for xavier uniform initialization.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
Epsilon for layer normalization.
is_training (`bool`, *optional*, defaults to `False`):
Whether to run in training or inference mode.
use_auxiliary_loss (`bool`, *optional*, defaults to `True`):
Whether to calculate loss using intermediate predictions from transformer decoder.
output_auxiliary_logits (`bool`, *optional*, defaults to `True`):
|
286_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
output_auxiliary_logits (`bool`, *optional*, defaults to `True`):
Whether to return intermediate predictions from transformer decoder.
strides (`list`, *optional*, defaults to `[4, 8, 16, 32]`):
List containing the strides for feature maps in the encoder.
task_seq_len (`int`, *optional*, defaults to 77):
Sequence length for tokenizing text list input.
text_encoder_width (`int`, *optional*, defaults to 256):
Hidden size for text encoder.
text_encoder_context_length (`int`, *optional*, defaults to 77):
|
286_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Hidden size for text encoder.
text_encoder_context_length (`int`, *optional*, defaults to 77):
Input sequence length for text encoder.
text_encoder_num_layers (`int`, *optional*, defaults to 6):
Number of layers for transformer in text encoder.
text_encoder_vocab_size (`int`, *optional*, defaults to 49408):
Vocabulary size for tokenizer.
text_encoder_proj_layers (`int`, *optional*, defaults to 2):
Number of layers in the MLP used to project text queries.
text_encoder_n_ctx (`int`, *optional*, defaults to 16):
|
286_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Number of layers in the MLP used to project text queries.
text_encoder_n_ctx (`int`, *optional*, defaults to 16):
Number of learnable text context queries.
conv_dim (`int`, *optional*, defaults to 256):
Feature map dimension to map outputs from the backbone.
mask_dim (`int`, *optional*, defaults to 256):
Dimension for feature maps in pixel decoder.
hidden_dim (`int`, *optional*, defaults to 256):
Dimension for hidden states in transformer decoder.
encoder_feedforward_dim (`int`, *optional*, defaults to 1024):
|
286_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Dimension for hidden states in transformer decoder.
encoder_feedforward_dim (`int`, *optional*, defaults to 1024):
Dimension for FFN layer in pixel decoder.
norm (`str`, *optional*, defaults to `"GN"`):
Type of normalization.
encoder_layers (`int`, *optional*, defaults to 6):
Number of layers in pixel decoder.
decoder_layers (`int`, *optional*, defaults to 10):
Number of layers in transformer decoder.
use_task_norm (`bool`, *optional*, defaults to `True`):
Whether to normalize the task token.
|
286_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
use_task_norm (`bool`, *optional*, defaults to `True`):
Whether to normalize the task token.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads in transformer layers in the pixel and transformer decoders.
dropout (`float`, *optional*, defaults to 0.1):
Dropout probability for pixel and transformer decoders.
dim_feedforward (`int`, *optional*, defaults to 2048):
Dimension for FFN layer in transformer decoder.
pre_norm (`bool`, *optional*, defaults to `False`):
|
286_5_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
Dimension for FFN layer in transformer decoder.
pre_norm (`bool`, *optional*, defaults to `False`):
Whether to normalize hidden states before attention layers in transformer decoder.
enforce_input_proj (`bool`, *optional*, defaults to `False`):
Whether to project hidden states in transformer decoder.
query_dec_layers (`int`, *optional*, defaults to 2):
Number of layers in query transformer.
common_stride (`int`, *optional*, defaults to 4):
Common stride used for features in pixel decoder.
Examples:
|
286_5_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
common_stride (`int`, *optional*, defaults to 4):
Common stride used for features in pixel decoder.
Examples:
```python
>>> from transformers import OneFormerConfig, OneFormerModel
|
286_5_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
|
.md
|
>>> # Initializing a OneFormer shi-labs/oneformer_ade20k_swin_tiny configuration
>>> configuration = OneFormerConfig()
>>> # Initializing a model (with random weights) from the shi-labs/oneformer_ade20k_swin_tiny style configuration
>>> model = OneFormerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
286_5_15
|
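Building on the example above, a hedged sketch of customizing a few of the documented arguments; the backbone settings below mirror the Swin-Tiny defaults and are only illustrative.
```python
from transformers import OneFormerConfig, OneFormerModel, SwinConfig

# Customize the Swin backbone and a few of the arguments documented above.
backbone_config = SwinConfig(
    depths=[2, 2, 6, 2],
    num_heads=[3, 6, 12, 24],
    out_features=["stage1", "stage2", "stage3", "stage4"],
)
config = OneFormerConfig(
    backbone_config=backbone_config,
    num_queries=100,
    no_object_weight=0.1,
)
model = OneFormerModel(config)
```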
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
Constructs a OneFormer image processor. The image processor can be used to prepare image(s), task input(s) and
optional text inputs and targets for the model.
This image processor inherits from [`BaseImageProcessor`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the input to a certain `size`.
size (`int`, *optional*, defaults to 800):
|
286_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
Whether to resize the input to a certain `size`.
size (`int`, *optional*, defaults to 800):
Resize the input to the given size. Only has an effect if `do_resize` is set to `True`. If size is a
sequence like `(width, height)`, output size will be matched to this. If size is an int, smaller edge of
the image will be matched to this number, i.e. if `height > width`, then the image will be rescaled to `(size *
height / width, size)`.
resample (`int`, *optional*, defaults to `Resampling.BILINEAR`):
|
286_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
height / width, size)`.
resample (`int`, *optional*, defaults to `Resampling.BILINEAR`):
An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`,
`PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`,
`PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set
to `True`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the input to a certain `scale`.
|
286_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
to `True`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the input to a certain `scale`.
rescale_factor (`float`, *optional*, defaults to `1 / 255`):
Rescale the input by the given factor. Only has an effect if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the input with mean and standard deviation.
image_mean (`List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`):
|
286_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
image_mean (`List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`):
The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean.
image_std (`List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`):
The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the
ImageNet std.
ignore_index (`int`, *optional*):
Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels
|
286_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels
denoted with 0 (background) will be replaced with `ignore_index`.
do_reduce_labels (`bool`, *optional*, defaults to `False`):
Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0
is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k).
The background label will be replaced by `ignore_index`.
|
286_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
The background label will be replaced by `ignore_index`.
repo_path (`str`, *optional*, defaults to `"shi-labs/oneformer_demo"`):
Path to hub repo or local directory containing the JSON file with class information for the dataset.
If unset, will look for `class_info_file` in the current working directory.
class_info_file (`str`, *optional*):
JSON file containing class information for the dataset. See `shi-labs/oneformer_demo/cityscapes_panoptic.json` for an example.
num_text (`int`, *optional*):
|
286_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
|
.md
|
num_text (`int`, *optional*):
Number of text entries in the text input list.
num_labels (`int`, *optional*):
The number of labels in the segmentation map.
Methods: preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
|
286_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerprocessor
|
.md
|
Constructs a OneFormer processor which wraps [`OneFormerImageProcessor`] and
[`CLIPTokenizer`]/[`CLIPTokenizerFast`] into a single processor that inherits both the image processor and
tokenizer functionalities.
Args:
image_processor ([`OneFormerImageProcessor`]):
The image processor is a required input.
tokenizer ([`CLIPTokenizer`, `CLIPTokenizerFast`]):
The tokenizer is a required input.
max_seq_len (`int`, *optional*, defaults to 77):
Sequence length for input text list.
|
286_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerprocessor
|
.md
|
The tokenizer is a required input.
max_seq_len (`int`, *optional*, defaults to 77):
Sequence length for input text list.
task_seq_len (`int`, *optional*, defaults to 77):
Sequence length for input task token.
|
286_7_1
|
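A hedged sketch of building the processor from its two components; this assumes the checkpoint ships both the image processor configuration and the CLIP tokenizer files, as checkpoints saved with [`OneFormerProcessor`] do.
```python
from transformers import CLIPTokenizer, OneFormerImageProcessor, OneFormerProcessor

image_processor = OneFormerImageProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
tokenizer = CLIPTokenizer.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

# max_seq_len / task_seq_len default to 77, matching the text encoder's context length.
processor = OneFormerProcessor(
    image_processor=image_processor,
    tokenizer=tokenizer,
    max_seq_len=77,
    task_seq_len=77,
)
```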
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformermodel
|
.md
|
The bare OneFormer Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
config ([`OneFormerConfig`]): Model configuration class with all the parameters of the model.
|
286_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformermodel
|
.md
|
Parameters:
config ([`OneFormerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
286_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerforuniversalsegmentation
|
.md
|
OneFormer Model for instance, semantic and panoptic image segmentation.
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
config ([`OneFormerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
286_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerforuniversalsegmentation
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
286_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
287_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
287_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#overview
|
.md
|
SEW (Squeezed and Efficient Wav2Vec) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training
for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q.
Weinberger, Yoav Artzi.
The abstract from the paper is the following:
*This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
|
287_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#overview
|
.md
|
*This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
|
287_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#overview
|
.md
|
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
time, SEW reduces word error rate by 25-50% across different model sizes.*
This model was contributed by [anton-l](https://huggingface.co/anton-l).
|
287_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#usage-tips
|
.md
|
- SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- SEWForCTC is fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using
[`Wav2Vec2CTCTokenizer`]. See the sketch below.
|
287_2_0
|
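A minimal sketch of CTC inference with SEW; the fine-tuned checkpoint name and the small LibriSpeech demo dataset are assumptions for illustration.
```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWForCTC

# A CTC fine-tuned SEW checkpoint (assumed to be available on the Hub).
checkpoint = "asapp/sew-tiny-100k-ft-ls100h"
processor = AutoProcessor.from_pretrained(checkpoint)
model = SEWForCTC.from_pretrained(checkpoint)

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding via the Wav2Vec2CTCTokenizer wrapped by the processor.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```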
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#resources
|
.md
|
- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)
|
287_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
This is the configuration class to store the configuration of a [`SEWModel`]. It is used to instantiate a SEW model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the SEW
[asapp/sew-tiny-100k](https://huggingface.co/asapp/sew-tiny-100k) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
287_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32):
Vocabulary size of the SEW model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`SEWModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
|
287_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
287_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
squeeze_factor (`int`, *optional*, defaults to 2):
Sequence length downsampling factor after the encoder and upsampling factor after the transformer.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
|
287_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
final_dropout (`float`, *optional*, defaults to 0.1):
|
287_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
The dropout ratio for the attention probabilities.
final_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the final projection layer of [`SEWForCTC`].
layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
287_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
The norm to be applied to 1D convolutional layers in feature encoder. One of `"group"` for group
normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
convolutional layers.
|
287_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for output of the feature encoder.
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
|
287_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)`):
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
|
287_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)`):
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
|
287_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of *conv_kernel* defines the number of convolutional layers and has to match the length of
*conv_dim*.
conv_bias (`bool`, *optional*, defaults to `False`):
|
287_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
*conv_dim*.
conv_bias (`bool`, *optional*, defaults to `False`):
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (`bool`, *optional*, defaults to `True`):
|
287_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (`bool`, *optional*, defaults to `True`):
Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
[SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition](https://arxiv.org/abs/1904.08779).
mask_time_prob (`float`, *optional*, defaults to 0.05):
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
|
287_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
|
287_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`.
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
mask_time_min_masks (`int`, *optional*, defaults to 2):
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
|
287_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
|
287_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is
`True`.
mask_feature_length (`int`, *optional*, defaults to 10):
|
287_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
True`.
mask_feature_length (`int`, *optional*, defaults to 10):
Length of vector span along the feature axis.
mask_feature_min_masks (`int`, *optional*, defaults to 0):
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
step, irrespective of `mask_feature_prob`. Only relevant if
`mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
|
287_4_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
instance of [`SEWForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`SEWForCTC`].
|
287_4_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`SEWForCTC`].
use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of [`Wav2Vec2ForSequenceClassification`].
classifier_proj_size (`int`, *optional*, defaults to 256):
Dimensionality of the projection before token mean-pooling for classification.
Example:
```python
|
287_4_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
Dimensionality of the projection before token mean-pooling for classification.
Example:
```python
>>> from transformers import SEWConfig, SEWModel
|
287_4_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
|
.md
|
>>> # Initializing a SEW asapp/sew-tiny-100k style configuration
>>> configuration = SEWConfig()
>>> # Initializing a model (with random weights) from the asapp/sew-tiny-100k style configuration
>>> model = SEWModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
287_4_21
|
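And a hedged sketch of customizing the SpecAugment- and CTC-related arguments documented above; the values are illustrative, not recommended settings.
```python
from transformers import SEWConfig, SEWForCTC

config = SEWConfig(
    apply_spec_augment=True,
    mask_time_prob=0.1,       # fraction of time steps chosen as mask-span starts
    mask_time_length=10,
    mask_time_min_masks=2,
    mask_feature_prob=0.05,
    mask_feature_length=10,
    ctc_loss_reduction="mean",
    vocab_size=32,
)
model = SEWForCTC(config)  # randomly initialized, ready for CTC fine-tuning
```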
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewmodel
|
.md
|
The bare SEW Model transformer outputting raw hidden-states without any specific head on top.
SEW was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
287_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewmodel
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`SEWConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
287_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
287_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforctc
|
.md
|
SEW Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).
SEW was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
287_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforctc
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`SEWConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
287_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforctc
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
287_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforsequenceclassification
|
.md
|
SEW Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB
Keyword Spotting.
SEW was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
287_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforsequenceclassification
|
.md
|
Yoav Artzi.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
287_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
|
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`SEWConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
287_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
288_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
288_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#overview
|
.md
|
The AltCLIP model was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679v2) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP
(Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP's
|
288_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#overview
|
.md
|
text encoder with a pretrained multilingual text encoder XLM-R, we could obtain very close performances with CLIP on almost all tasks, and extended original CLIP's capabilities such as multilingual understanding.
The abstract from the paper is the following:
*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model.
|
288_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#overview
|
.md
|
Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained
multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of
teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art
|
288_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
|
https://huggingface.co/docs/transformers/en/model_doc/altclip/#overview
|
.md
|
performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with
CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*
This model was contributed by [jongjyh](https://huggingface.co/jongjyh).
|
288_1_3
|