source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
281_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2):
281_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`RoCBertModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. is_decoder (`bool`, *optional*, defaults to `False`):
281_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
The epsilon used by the layer normalization layers. is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
281_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
relevant if `config.is_decoder=True`. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
281_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. enable_pronunciation (`bool`, *optional*, defaults to `True`):
281_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
The dropout ratio for the classification head. enable_pronunciation (`bool`, *optional*, defaults to `True`): Whether or not the model uses the pronunciation embedding when training. enable_shape (`bool`, *optional*, defaults to `True`): Whether or not the model uses the shape embedding when training. pronunciation_embed_dim (`int`, *optional*, defaults to 768): Dimension of the pronunciation_embed. pronunciation_vocab_size (`int`, *optional*, defaults to 910):
281_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
Dimension of the pronunciation_embed. pronunciation_vocab_size (`int`, *optional*, defaults to 910): Pronunciation Vocabulary size of the RoCBert model. Defines the number of different tokens that can be represented by the `input_pronunciation_ids` passed when calling [`RoCBertModel`]. shape_embed_dim (`int`, *optional*, defaults to 512): Dimension of the shape_embed. shape_vocab_size (`int`, *optional*, defaults to 24858):
281_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
Dimension of the shape_embed. shape_vocab_size (`int`, *optional*, defaults to 24858): Shape Vocabulary size of the RoCBert model. Defines the number of different tokens that can be represented by the `input_shape_ids` passed when calling [`RoCBertModel`]. concat_input (`bool`, *optional*, defaults to `True`): Defines the way of merging the shape_embed, pronunciation_embed and word_embed. If the value is `True`, output_embed = torch.cat((word_embed, shape_embed, pronunciation_embed), -1), else output_embed =
281_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
output_embed = torch.cat((word_embed, shape_embed, pronunciation_embed), -1), else output_embed = (word_embed + shape_embed + pronunciation_embed) / 3 Example: ```python >>> from transformers import RoCBertModel, RoCBertConfig
281_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertconfig
.md
>>> # Initializing a RoCBert weiweishi/roc-bert-base-zh style configuration >>> configuration = RoCBertConfig() >>> # Initializing a model from the weiweishi/roc-bert-base-zh style configuration >>> model = RoCBertModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` Methods: all
281_3_13
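To make the `concat_input` behaviour concrete, here is a standalone sketch of the merge described above. The tensors are toy stand-ins for the three embedding streams, not the actual modeling code:

```python
>>> import torch

>>> # toy tensors standing in for RoCBert's three embedding streams (sizes are illustrative)
>>> batch_size, seq_len, dim = 2, 8, 768
>>> word_embed = torch.randn(batch_size, seq_len, dim)
>>> shape_embed = torch.randn(batch_size, seq_len, dim)
>>> pronunciation_embed = torch.randn(batch_size, seq_len, dim)

>>> concat_input = True  # the config default
>>> if concat_input:
...     # concatenate along the feature dimension
...     output_embed = torch.cat((word_embed, shape_embed, pronunciation_embed), -1)
... else:
...     # average the three streams (requires matching dimensions)
...     output_embed = (word_embed + shape_embed + pronunciation_embed) / 3
```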
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocberttokenizer
.md
Construct a RoCBert tokenizer. Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. word_shape_file (`str`): File containing the word => shape info. word_pronunciation_file (`str`): File containing the word => pronunciation info. do_lower_case (`bool`, *optional*, defaults to `True`):
281_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocberttokenizer
.md
File containing the word => pronunciation info. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`): Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True` unk_token (`str`, *optional*, defaults to `"[UNK]"`):
281_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocberttokenizer
.md
`do_basic_tokenize=True` unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last
281_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocberttokenizer
.md
sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
281_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocberttokenizer
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
281_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocberttokenizer
.md
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). Methods: build_inputs_with_special_tokens - get_special_tokens_mask
281_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocberttokenizer
.md
value for `lowercase` (as in the original BERT). Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
281_4_6
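A minimal usage sketch; the checkpoint name comes from the config example above, and the exact set of returned keys (e.g. `input_shape_ids`, `input_pronunciation_ids`) is an assumption based on the description above:

```python
>>> from transformers import RoCBertTokenizer

>>> tokenizer = RoCBertTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")

>>> # besides input_ids, the tokenizer is expected to also return shape and pronunciation ids
>>> encoding = tokenizer("你好,世界", return_tensors="pt")
>>> print(list(encoding.keys()))
```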
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertmodel
.md
The bare RoCBert Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model.
281_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertmodel
.md
behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
281_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertmodel
.md
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
281_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertmodel
.md
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
281_5_3
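A minimal sketch of the decoder-style initialization described above (this builds a randomly initialized model from the config):

```python
>>> from transformers import RoCBertConfig, RoCBertModel

>>> # decoder configuration with cross-attention, as described above
>>> config = RoCBertConfig(is_decoder=True, add_cross_attention=True)
>>> model = RoCBertModel(config)
```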
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforpretraining
.md
RoCBert Model with contrastive loss and masked_lm_loss during pretraining. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model.
281_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforpretraining
.md
behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
281_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforcausallm
.md
RoCBert Model with a `language modeling` head on top for CLM fine-tuning. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
281_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
281_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertformaskedlm
.md
RoCBert Model with a `language modeling` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
281_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertformaskedlm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
281_8_1
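A minimal sketch of masked-token prediction with this head; the checkpoint and example sentence are illustrative:

```python
>>> import torch
>>> from transformers import RoCBertTokenizer, RoCBertForMaskedLM

>>> tokenizer = RoCBertTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
>>> model = RoCBertForMaskedLM.from_pretrained("weiweishi/roc-bert-base-zh")

>>> inputs = tokenizer("这是一个[MASK]的例子", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # pick the highest-scoring token for the masked position
>>> mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> predicted_id = logits[0, mask_index].argmax(-1)
>>> print(tokenizer.decode(predicted_id))
```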
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforsequenceclassification
.md
RoCBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model.
281_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforsequenceclassification
.md
behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
281_9_1
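A hedged sketch of a single-sentence classification forward pass; `num_labels` and the example sentence are illustrative, and the classification head is untrained when loaded from the base checkpoint:

```python
>>> import torch
>>> from transformers import RoCBertTokenizer, RoCBertForSequenceClassification

>>> tokenizer = RoCBertTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
>>> # the classification head is randomly initialized until fine-tuned
>>> model = RoCBertForSequenceClassification.from_pretrained("weiweishi/roc-bert-base-zh", num_labels=2)

>>> inputs = tokenizer("这部电影很好看", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class = logits.argmax(-1).item()
```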
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertformultiplechoice
.md
RoCBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model.
281_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertformultiplechoice
.md
behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
281_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertfortokenclassification
.md
RoCBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model.
281_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertfortokenclassification
.md
behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
281_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforquestionanswering
.md
RoCBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
281_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforquestionanswering
.md
behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
281_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
282_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
282_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
The SwiftFormer model was proposed in [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
282_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called 'SwiftFormer' is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more
282_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.
282_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
The abstract from the paper is the following:
282_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
*Self-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain
282_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient
282_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is
282_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2.*
282_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
.md
This model was contributed by [shehan97](https://huggingface.co/shehan97). The TensorFlow version was contributed by [joaocmd](https://huggingface.co/joaocmd). The original code can be found [here](https://github.com/Amshaker/SwiftFormer).
282_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
.md
This is the configuration class to store the configuration of a [`SwiftFormerModel`]. It is used to instantiate a SwiftFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SwiftFormer [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
282_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image num_channels (`int`, *optional*, defaults to 3): The number of input channels depths (`List[int]`, *optional*, defaults to `[3, 3, 6, 4]`): Depth of each stage embed_dims (`List[int]`, *optional*, defaults to `[48, 56, 112, 220]`):
282_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
.md
Depth of each stage embed_dims (`List[int]`, *optional*, defaults to `[48, 56, 112, 220]`): The embedding dimension at each stage mlp_ratio (`int`, *optional*, defaults to 4): Ratio of size of the hidden dimensionality of an MLP to the dimensionality of its input. downsamples (`List[bool]`, *optional*, defaults to `[True, True, True, True]`): Whether or not to downsample inputs between two stages. hidden_act (`str`, *optional*, defaults to `"gelu"`):
282_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
.md
Whether or not to downsample inputs between two stages. hidden_act (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (string). `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. down_patch_size (`int`, *optional*, defaults to 3): The size of patches in downsampling layers. down_stride (`int`, *optional*, defaults to 2): The stride of convolution kernels in downsampling layers. down_pad (`int`, *optional*, defaults to 1): Padding in downsampling layers.
282_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
.md
down_pad (`int`, *optional*, defaults to 1): Padding in downsampling layers. drop_path_rate (`float`, *optional*, defaults to 0.0): Rate at which to increase dropout probability in DropPath. drop_mlp_rate (`float`, *optional*, defaults to 0.0): Dropout rate for the MLP component of SwiftFormer. drop_conv_encoder_rate (`float`, *optional*, defaults to 0.0): Dropout rate for the ConvEncoder component of SwiftFormer. use_layer_scale (`bool`, *optional*, defaults to `True`):
282_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
.md
Dropout rate for the ConvEncoder component of SwiftFormer. use_layer_scale (`bool`, *optional*, defaults to `True`): Whether to scale outputs from token mixers. layer_scale_init_value (`float`, *optional*, defaults to 1e-05): Factor by which outputs from token mixers are scaled. batch_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the batch normalization layers. Example: ```python >>> from transformers import SwiftFormerConfig, SwiftFormerModel
282_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
.md
>>> # Initializing a SwiftFormer swiftformer-base-patch16-224 style configuration >>> configuration = SwiftFormerConfig() >>> # Initializing a model (with random weights) from the swiftformer-base-patch16-224 style configuration >>> model = SwiftFormerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
282_2_6
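You can also override the stage depths, embedding dimensions, and regularization rates described above; the values below are illustrative:

```python
>>> from transformers import SwiftFormerConfig, SwiftFormerModel

>>> # custom stage depths and embedding dimensions (illustrative values)
>>> configuration = SwiftFormerConfig(depths=[2, 2, 4, 3], embed_dims=[48, 56, 112, 220], drop_path_rate=0.1)
>>> model = SwiftFormerModel(configuration)
```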
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformermodel
.md
The bare SwiftFormer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SwiftFormerConfig`]): Model configuration class with all the parameters of the model.
282_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformermodel
.md
behavior. Parameters: config ([`SwiftFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
282_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerforimageclassification
.md
SwiftFormer Model transformer with an image classification head on top (e.g. for ImageNet). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SwiftFormerConfig`]): Model configuration class with all the parameters of the model.
282_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerforimageclassification
.md
behavior. Parameters: config ([`SwiftFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
282_4_1
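A hedged inference sketch using the `MBZUAI/swiftformer-xs` checkpoint mentioned above; the image URL and the use of `AutoImageProcessor` are assumptions for illustration:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, SwiftFormerForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("MBZUAI/swiftformer-xs")
>>> model = SwiftFormerForImageClassification.from_pretrained("MBZUAI/swiftformer-xs")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # map the highest-scoring logit to its ImageNet label
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
```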
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#tfswiftformermodel
.md
No docstring available for TFSwiftFormerModel Methods: call
282_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#tfswiftformerforimageclassification
.md
No docstring available for TFSwiftFormerForImageClassification Methods: call
282_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
283_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
283_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
The SeamlessM4T-v2 model was proposed in [Seamless: Multilingual Expressive and Streaming Speech Translation](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/) by the Seamless Communication team from Meta AI.
283_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
SeamlessM4T-v2 is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. It is an improvement on the [previous version](https://huggingface.co/docs/transformers/main/model_doc/seamless_m4t). For more details on the differences between v1 and v2, refer to section [Difference with SeamlessM4T-v1](#difference-with-seamlessm4t-v1).
283_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
SeamlessM4T-v2 enables multiple tasks without relying on separate models: - Speech-to-speech translation (S2ST) - Speech-to-text translation (S2TT) - Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR) [`SeamlessM4Tv2Model`] can perform all the above tasks, but each task also has its own dedicated sub-model. The abstract from the paper is the following:
283_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
*Recent advancements in automatic speech translation have dramatically expanded language coverage, improved multimodal capabilities, and enabled a wide range of tasks and functionalities. That said, large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First,
283_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model—SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. The expanded version of SeamlessAlign adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on
283_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on which our two newest models, SeamlessExpressive and SeamlessStreaming, are initiated. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for
283_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for SeamlessStreaming, our model leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances. As the first of its kind, SeamlessStreaming enables simultaneous speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we
283_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we combined novel and modified versions of existing automatic metrics to evaluate prosody, latency, and robustness. For human evaluations, we adapted existing protocols tailored for measuring the most relevant attributes in the preservation of meaning, naturalness, and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming
283_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. Consequently, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system
283_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
.md
major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real-time. In sum, Seamless gives us a pivotal look at the technical foundation needed to turn the Universal Speech Translator from a science fiction concept into a real-world technology. Finally, contributions in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below.*
283_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#usage
.md
In the following example, we'll load an Arabic audio sample and an English text sample and convert them into Russian speech and French text. First, load the processor and a checkpoint of the model: ```python >>> from transformers import AutoProcessor, SeamlessM4Tv2Model
283_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#usage
.md
>>> processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large") >>> model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large") ``` You can seamlessly use this model on text or on audio, to generate either translated text or translated audio. Here is how to use the processor to process text and audio: ```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset
283_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#usage
.md
```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset >>> dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True) >>> audio_sample = next(iter(dataset))["audio"]
283_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#usage
.md
>>> # now, process it >>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt") >>> # now, process some English text as well >>> text_inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt") ```
283_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#speech
.md
[`SeamlessM4Tv2Model`] can *seamlessly* generate text or speech with few or no changes. Let's target Russian voice translation: ```python >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() ``` With basically the same code, we've translated English text and Arabic speech to Russian speech samples.
283_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#text
.md
Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4Tv2Model.generate`]. This time, let's translate to French. ```python >>> # from audio >>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
283_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#text
.md
>>> # from text >>> output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) ```
283_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#1-use-dedicated-models
.md
[`SeamlessM4Tv2Model`] is the Transformers top-level model for generating speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task; the rest of the code is exactly the same: ```python >>> from transformers import SeamlessM4Tv2ForSpeechToSpeech
283_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#1-use-dedicated-models
.md
```python >>> from transformers import SeamlessM4Tv2ForSpeechToSpeech >>> model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained("facebook/seamless-m4t-v2-large") ``` Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task; you only have to remove `generate_speech=False`. ```python >>> from transformers import SeamlessM4Tv2ForTextToText >>> model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large") ```
283_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#1-use-dedicated-models
.md
>>> model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large") ``` Feel free to try out [`SeamlessM4Tv2ForSpeechToText`] and [`SeamlessM4Tv2ForTextToSpeech`] as well.
283_5_2
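For instance, a sketch of speech-to-text translation with the dedicated S2TT model, reusing `audio_inputs` and `processor` from the snippets above; the decode indexing assumes `generate` returns a plain tensor of token ids:

```python
>>> from transformers import SeamlessM4Tv2ForSpeechToText

>>> model = SeamlessM4Tv2ForSpeechToText.from_pretrained("facebook/seamless-m4t-v2-large")
>>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra")
>>> translated_text_from_audio = processor.decode(output_tokens[0].tolist(), skip_special_tokens=True)
```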
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#2-change-the-speaker-identity
.md
You can change the speaker used for speech synthesis with the `speaker_id` argument. Some `speaker_id` values work better than others for some languages!
283_6_0
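For example, reusing `text_inputs` from the usage section (the speaker index is arbitrary):

```python
>>> # same call as before, but with a different vocoder speaker
>>> audio_array = model.generate(**text_inputs, tgt_lang="rus", speaker_id=7)[0].cpu().numpy().squeeze()
```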
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#3-change-the-generation-strategy
.md
You can use different [generation strategies](../generation_strategies) for text generation, e.g. `.generate(input_ids=input_ids, text_num_beams=4, text_do_sample=True)`, which will perform multinomial beam-search decoding on the text model. Note that speech generation only supports greedy decoding (the default) or multinomial sampling, which can be used with e.g. `.generate(..., speech_do_sample=True, speech_temperature=0.6)`.
283_7_0
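A sketch combining both strategies, reusing `text_inputs` from the usage section:

```python
>>> # beam search on the text decoder, multinomial sampling on the speech decoder
>>> outputs = model.generate(
...     **text_inputs,
...     tgt_lang="rus",
...     text_num_beams=4,
...     speech_do_sample=True,
...     speech_temperature=0.6,
... )
```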
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#4-generate-speech-and-text-at-the-same-time
.md
Use `return_intermediate_token_ids=True` with [`SeamlessM4Tv2Model`] to return both speech and text!
283_8_0
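A sketch reusing `text_inputs` from the usage section; the `waveform` and `sequences` attribute names on the returned object are assumptions:

```python
>>> outputs = model.generate(**text_inputs, tgt_lang="rus", return_intermediate_token_ids=True)
>>> # the returned object is assumed to carry both the waveform and the intermediate text tokens
>>> audio_array = outputs.waveform[0].cpu().numpy().squeeze()
>>> translated_text = processor.decode(outputs.sequences[0].tolist(), skip_special_tokens=True)
```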
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#model-architecture
.md
SeamlessM4T-v2 features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text.
283_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#model-architecture
.md
Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the [HiFi-GAN](https://arxiv.org/abs/2010.05646) architecture is placed on top of the second seq2seq model.
283_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#difference-with-seamlessm4t-v1
.md
The architecture of this new version differs from the first in a few aspects:
283_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#improvements-on-the-second-pass-model
.md
The second seq2seq model, named the text-to-unit model, is now non-autoregressive, meaning that it computes units in a **single forward pass**. This is made possible by: - the use of **character-level embeddings**, meaning that each character of the predicted translated text has its own embeddings, which are then used to predict the unit tokens. - the use of an intermediate duration predictor that predicts speech duration at the **character level** on the predicted translated text.
283_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#improvements-on-the-second-pass-model
.md
- the use of a new text-to-unit decoder mixing convolutions and self-attention to handle longer context.
283_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#difference-in-the-speech-encoder
.md
The speech encoder, which is used during the first-pass generation process to predict the translated text, differs mainly from the previous speech encoder through these mechanisms: - the use of a chunked attention mask to prevent attention across chunks, ensuring that each position attends only to positions within its own chunk and a fixed number of previous chunks.
283_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#difference-in-the-speech-encoder
.md
- the use of relative position embeddings, which only consider the distance between sequence elements rather than absolute positions. Please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155) for more details. - the use of a causal depth-wise convolution instead of a non-causal one.
283_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#generation-process
.md
Here's how the generation process works: - Input text or speech is processed through its specific encoder. - A decoder creates text tokens in the desired language. - If speech generation is required, the second seq2seq model generates unit tokens in a non-autoregressive way. - These unit tokens are then passed through the final vocoder to produce the actual speech.
283_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#generation-process
.md
- These unit tokens are then passed through the final vocoder to produce the actual speech. This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/seamless_communication).
283_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2model
.md
The original SeamlessM4Tv2 Model transformer, which can be used for every available task (S2ST, S2TT, T2TT, T2ST). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
283_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2model
.md
behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. current_modality (`str`, *optional*, defaults to `"text"`): Default modality. Used only to initialize the model. It can be set to `"text"` or `"speech"`.
283_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2model
.md
Default modality. Used only to initialize the model. It can be set to `"text"` or `"speech"`. This will be updated automatically according to the modality passed to the forward and generate passes (`input_ids` for text and `input_features` for audio). Methods: generate
283_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2fortexttospeech
.md
The text-to-speech SeamlessM4Tv2 Model transformer which can be used for T2ST. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
283_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2fortexttospeech
.md
behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: generate
283_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2forspeechtospeech
.md
The speech-to-speech SeamlessM4Tv2 Model transformer which can be used for S2ST. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
283_16_0