source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2forspeechtospeech
|
.md
|
behavior.
Parameters:
config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: generate
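As a brief illustration of the `generate` method listed above, here is a minimal speech-to-speech (S2ST) sketch. It assumes the `facebook/seamless-m4t-v2-large` checkpoint and a 16 kHz mono waveform; the zero array below is only a placeholder for real audio.
```python
import numpy as np
from transformers import AutoProcessor, SeamlessM4Tv2ForSpeechToSpeech

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained("facebook/seamless-m4t-v2-large")

# Placeholder for one second of 16 kHz mono audio; replace with a real waveform.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audios=audio, sampling_rate=16000, return_tensors="pt")

# The first element of the output contains the generated speech waveform.
waveform = model.generate(**inputs, tgt_lang="fra")[0]
```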
|
283_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2fortexttotext
|
.md
|
The text-to-text SeamlessM4Tv2 Model transformer which can be used for T2TT.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
|
283_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2fortexttotext
|
.md
|
behavior.
Parameters:
config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- generate
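For the `generate` method listed above, a minimal text-to-text (T2TT) sketch follows; it assumes the `facebook/seamless-m4t-v2-large` checkpoint and the `src_lang`/`tgt_lang` codes documented on the model card.
```python
from transformers import AutoProcessor, SeamlessM4Tv2ForTextToText

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large")

# Translate English text to French.
inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**inputs, tgt_lang="fra")
print(processor.decode(output_tokens[0], skip_special_tokens=True))
```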
|
283_17_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2forspeechtotext
|
.md
|
The speech-to-text SeamlessM4Tv2 Model transformer which can be used for S2TT.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
|
283_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2forspeechtotext
|
.md
|
behavior.
Parameters:
config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- generate
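Similarly, a minimal speech-to-text (S2TT) sketch for the `generate` method, again assuming the `facebook/seamless-m4t-v2-large` checkpoint and 16 kHz mono input:
```python
import numpy as np
from transformers import AutoProcessor, SeamlessM4Tv2ForSpeechToText

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2ForSpeechToText.from_pretrained("facebook/seamless-m4t-v2-large")

# Placeholder for one second of 16 kHz mono audio; replace with a real waveform.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audios=audio, sampling_rate=16000, return_tensors="pt")
output_tokens = model.generate(**inputs, tgt_lang="eng")
print(processor.decode(output_tokens[0], skip_special_tokens=True))
```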
|
283_18_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
This is the configuration class to store the configuration of a [`~SeamlessM4Tv2Model`]. It is used to instantiate
a SeamlessM4Tv2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the SeamlessM4Tv2
[""](https://huggingface.co/"") architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
283_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 256102):
Vocabulary size of the text modality of the SeamlessM4Tv2 model. Defines the number of different tokens
that can be represented by the `inputs_ids` passed when calling [`~SeamlessM4Tv2Model`],
[`~SeamlessM4Tv2ForTextToSpeech`] or [`~SeamlessM4Tv2ForTextToText`].
|
283_19_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
[`~SeamlessM4Tv2ForTextToSpeech`] or [`~SeamlessM4Tv2ForTextToText`].
t2u_vocab_size (`int`, *optional*, defaults to 10082):
Unit vocabulary size of the SeamlessM4Tv2 model. Defines the number of different "unit tokens" that can be
represented by the `inputs_ids` passed when calling the Text-To-Units sub-model of [`~SeamlessM4Tv2Model`],
[`~SeamlessM4Tv2ForSpeechToSpeech`] or [`~SeamlessM4Tv2ForTextToSpeech`].
char_vocab_size (`int`, *optional*, defaults to 10943):
|
283_19_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
char_vocab_size (`int`, *optional*, defaults to 10943):
Character vocabulary size of the SeamlessM4Tv2 model. Defines the number of different character tokens that
can be represented by the `char_inputs_ids` passed when calling the Text-To-Units sub-model of
[`~SeamlessM4Tv2Model`], [`~SeamlessM4Tv2ForSpeechToSpeech`] or [`~SeamlessM4Tv2ForTextToSpeech`].
> Parameters shared across sub-models
hidden_size (`int`, *optional*, defaults to 1024):
|
283_19_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
> Parameters shared across sub-models
hidden_size (`int`, *optional*, defaults to 1024):
Dimensionality of the "intermediate" layers in the architecture.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
|
283_19_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model's text encoder and decoder might ever be used with. Typically set
this to something large just in case (e.g., 512 or 1024 or 2048).
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
|
283_19_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
this to something large just in case (e.g., 512 or 1024 or 2048).
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
Whether the model is used as an encoder/decoder or not.
encoder_layerdrop (`float`, *optional*, defaults to 0.05):
The LayerDrop probability for the encoders. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.05):
|
283_19_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.05):
The LayerDrop probability for the decoders. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the decoder and feed-forward layers. If string,
`"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
|
283_19_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
`"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, decoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all attention layers.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all activation layers in the model.
|
283_19_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all activation layers in the model.
scale_embedding (`bool`, *optional*, defaults to `True`):
Scale embeddings by dividing by sqrt(d_model).
> Text encoder and text decoder specific parameters
encoder_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer text encoder.
encoder_ffn_dim (`int`, *optional*, defaults to 8192):
|
283_19_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
Number of hidden layers in the Transformer text encoder.
encoder_ffn_dim (`int`, *optional*, defaults to 8192):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text encoder.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer text encoder.
decoder_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer text decoder.
|
283_19_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
decoder_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer text decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 8192):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text decoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer text decoder.
decoder_start_token_id (`int`, *optional*, defaults to 3):
|
283_19_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
decoder_start_token_id (`int`, *optional*, defaults to 3):
If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token. Only
applied in the text decoder.
max_new_tokens (`int`, *optional*, defaults to 256):
The maximum numbers of text tokens to generate, ignoring the number of tokens in the prompt.
pad_token_id (`int`, *optional*, defaults to 0):
The id of the _padding_ text token. Only applied to the text-decoder model.
|
283_19_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
pad_token_id (`int`, *optional*, defaults to 0):
The id of the _padding_ text token. Only applied to the text-decoder model.
bos_token_id (`int`, *optional*, defaults to 2):
The id of the _beginning-of-stream_ text token. Only applied to the text-decoder model.
eos_token_id (`int`, *optional*, defaults to 3):
The id of the _end-of-stream_ text token. Only applied to the text-decoder model.
> Speech encoder specific parameters
speech_encoder_layers (`int`, *optional*, defaults to 24):
|
283_19_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
> Speech encoder specific parameters
speech_encoder_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer speech encoder.
speech_encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer speech encoder.
speech_encoder_intermediate_size (`int`, *optional*, defaults to 4096):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer speech encoder.
|
283_19_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer speech encoder.
speech_encoder_hidden_act (`str` or `function`, *optional*, defaults to `"swish"`):
The non-linear activation function (function or string) in the speech encoder. If string, `"gelu"`,
`"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported.
speech_encoder_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all layers in the speech encoder.
|
283_19_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
speech_encoder_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all layers in the speech encoder.
add_adapter (`bool`, *optional*, defaults to `True`):
Add an adapter layer on top of the speech encoder.
speech_encoder_layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability for the speech encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
feature_projection_input_dim (`int`, *optional*, defaults to 160):
|
283_19_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
https://arxiv.org/abs/1909.11556) for more details.
feature_projection_input_dim (`int`, *optional*, defaults to 160):
Input dimension of the input feature projection of the speech encoder, i.e. the dimension after processing
input audios with [`SeamlessM4TFeatureExtractor`].
adaptor_kernel_size (`int`, *optional*, defaults to 8):
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adaptor_stride (`int`, *optional*, defaults to 8):
|
283_19_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
adaptor_stride (`int`, *optional*, defaults to 8):
Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adaptor_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all layers in the speech adapter.
num_adapter_layers (`int`, *optional*, defaults to 1):
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
|
283_19_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
position_embeddings_type (`str`, *optional*, defaults to `"relative_key"`):
Can be set to `relative_key`. If left as `None`, no relative position embedding is applied. Only
applied to the speech encoder. For more information on `"relative_key"`, please refer to [Self-Attention
with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
|
283_19_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
conv_depthwise_kernel_size (`int`, *optional*, defaults to 31):
Kernel size of convolutional depthwise 1D layer in Conformer blocks. Only applied to the speech encoder.
left_max_position_embeddings (`int`, *optional*, defaults to 64):
The left clipping value for relative positions.
right_max_position_embeddings (`int`, *optional*, defaults to 8):
The right clipping value for relative positions.
|
283_19_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
right_max_position_embeddings (`int`, *optional*, defaults to 8):
The right clipping value for relative positions.
speech_encoder_chunk_size (`int`, *optional*, defaults to 20000):
The size of each attention chunk.
speech_encoder_left_chunk_num (`int`, *optional*, defaults to 128):
Number of chunks on the left up to which lookahead is allowed.
> Text-To-Unit (t2u) model specific parameters
t2u_bos_token_id (`int`, *optional*, defaults to 0):
|
283_19_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
> Text-To-Unit (t2u) model specific parameters
t2u_bos_token_id (`int`, *optional*, defaults to 0):
The id of the _beginning-of-stream_ unit token. Only applied to the text-to-unit seq2seq model.
t2u_pad_token_id (`int`, *optional*, defaults to 1):
The id of the _padding_ unit token. Only applied to the text-to-unit seq2seq model.
t2u_eos_token_id (`int`, *optional*, defaults to 2):
The id of the _end-of-stream_ unit token. Only applied to the text-to-unit seq2seq model.
|
283_19_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
The id of the _end-of-stream_ unit token. Only applied to the text-to-unit seq2seq model.
t2u_encoder_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer text-to-unit encoder.
t2u_encoder_ffn_dim (`int`, *optional*, defaults to 8192):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text-to-unit encoder.
t2u_encoder_attention_heads (`int`, *optional*, defaults to 16):
|
283_19_23
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
t2u_encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer text-to-unit encoder.
t2u_decoder_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer text-to-unit decoder.
t2u_decoder_ffn_dim (`int`, *optional*, defaults to 8192):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text-to-unit decoder.
t2u_decoder_attention_heads (`int`, *optional*, defaults to 16):
|
283_19_24
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
t2u_decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer text-to-unit decoder.
t2u_max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model's text-to-unit component might ever be used with. Typically set
this to something large just in case (e.g., 512 or 1024 or 2048).
t2u_variance_predictor_embed_dim (`int`, *optional*, defaults to 1024):
|
283_19_25
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
t2u_variance_predictor_embed_dim (`int`, *optional*, defaults to 1024):
The projection dimension of the text-to-unit's duration predictor.
t2u_variance_predictor_hidden_dim (`int`, *optional*, defaults to 256):
Internal dimension of the text-to-unit's duration predictor.
t2u_variance_predictor_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the convolutional layers of the text-to-unit's duration predictor.
t2u_variance_pred_dropout (`float`, *optional*, defaults to 0.5):
|
283_19_26
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
t2u_variance_pred_dropout (`float`, *optional*, defaults to 0.5):
The dropout probability of the text-to-unit's duration predictor.
> Hifi-Gan Vocoder specific parameters
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the output audio will be generated, expressed in hertz (Hz).
upsample_initial_channel (`int`, *optional*, defaults to 512):
The number of input channels into the hifi-gan upsampling network. Applies to the vocoder only.
|
283_19_27
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
The number of input channels into the hifi-gan upsampling network. Applies to the vocoder only.
upsample_rates (`Tuple[int]` or `List[int]`, *optional*, defaults to `[5, 4, 4, 2, 2]`):
A tuple of integers defining the stride of each 1D convolutional layer in the vocoder upsampling network.
The length of *upsample_rates* defines the number of convolutional layers and has to match the length of
*upsample_kernel_sizes*. Applies to the vocoder only.
|
283_19_28
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
*upsample_kernel_sizes*. Applies to the vocoder only.
upsample_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[11, 8, 8, 4, 4]`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the vocoder upsampling
network. The length of *upsample_kernel_sizes* defines the number of convolutional layers and has to match
the length of *upsample_rates*. Applies to the vocoder only.
resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`):
|
283_19_29
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`):
A tuple of integers defining the kernel sizes of the vocoder 1D convolutional layers in the multi-receptive
field fusion (MRF) module. Applies to the vocoder only.
resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 5], [1, 3, 5], [1, 3, 5]]`):
A nested tuple of integers defining the dilation rates of the vocoder dilated 1D convolutional layers in
|
283_19_30
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
A nested tuple of integers defining the dilation rates of the vocoder dilated 1D convolutional layers in
the multi-receptive field fusion (MRF) module. Applies to the vocoder only.
leaky_relu_slope (`float`, *optional*, defaults to 0.1):
The angle of the negative slope used by the leaky ReLU activation in the vocoder. Applies to the vocoder
only.
unit_hifi_gan_vocab_size (`int`, *optional*, defaults to 10000):
|
283_19_31
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
only.
unit_hifi_gan_vocab_size (`int`, *optional*, defaults to 10000):
Vocabulary size of the SeamlessM4Tv2 vocoder. Defines the number of different unit tokens that can be
represented by the `inputs_ids` passed when calling the vocoder of [`~SeamlessM4Tv2Model`],
[`~SeamlessM4Tv2ForSpeechToSpeech`] or [`~SeamlessM4Tv2ForTextToSpeech`].
unit_embed_dim (`int`, *optional*, defaults to 1280):
The projection dimension of the input ids given to the hifi-gan vocoder. Applies to the vocoder only.
|
283_19_32
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
The projection dimension of the input ids given to the hifi-gan vocoder. Applies to the vocoder only.
lang_embed_dim (`int`, *optional*, defaults to 256):
The projection dimension of the target language given to the hifi-gan vocoder. Applies to the vocoder only.
spkr_embed_dim (`int`, *optional*, defaults to 256):
The projection dimension of the speaker id given to the hifi-gan vocoder. Applies to the vocoder only.
vocoder_num_langs (`int`, *optional*, defaults to 36):
|
283_19_33
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
vocoder_num_langs (`int`, *optional*, defaults to 36):
Number of languages supported by the vocoder. Might be different from `t2u_num_langs`.
vocoder_num_spkrs (`int`, *optional*, defaults to 200):
Number of speakers supported by the vocoder.
variance_predictor_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the duration predictor. Applies to the vocoder only.
var_pred_dropout (`float`, *optional*, defaults to 0.5):
The dropout probability of the duration predictor. Applies to the vocoder only.
|
283_19_34
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
The dropout probability of the duration predictor. Applies to the vocoder only.
vocoder_offset (`int`, *optional*, defaults to 4):
Offset the unit token ids by this number to account for symbol tokens. Applies to the vocoder only.
```python
>>> from transformers import SeamlessM4Tv2Model, SeamlessM4Tv2Config
|
283_19_35
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
|
.md
|
>>> # Initializing a SeamlessM4Tv2 "" style configuration
>>> configuration = SeamlessM4Tv2Config()
>>> # Initializing a model from the "" style configuration
>>> model = SeamlessM4Tv2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
283_19_36
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
284_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
284_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#overview
|
.md
|
The ViTMSN model was proposed in [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes,
Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. The paper presents a joint-embedding architecture to match the prototypes
of masked patches with that of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot
regimes.
|
284_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#overview
|
.md
|
regimes.
The abstract from the paper is the following:
*We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our
approach matches the representation of an image view containing randomly masked patches to the representation of the original
unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the
|
284_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#overview
|
.md
|
unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures,
while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance,
on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy,
and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark.*
|
284_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#overview
|
.md
|
<img src="https://i.ibb.co/W6PQMdC/Screenshot-2022-09-13-at-9-08-40-AM.png" alt="drawing" width="600"/>
<small> MSN architecture. Taken from the <a href="https://arxiv.org/abs/2204.07141">original paper.</a> </small>
This model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code can be found [here](https://github.com/facebookresearch/msn).
|
284_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#usage-tips
|
.md
|
- MSN (masked siamese networks) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training
objective is to match the prototypes assigned to the unmasked views of the images to that of the masked views of the same images.
- The authors have only released pre-trained weights of the backbone (ImageNet-1k pre-training). So, to use that on your own image classification dataset,
use the [`ViTMSNForImageClassification`] class which is initialized from [`ViTMSNModel`]. Follow
|
284_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#usage-tips
|
.md
|
use the [`ViTMSNForImageClassification`] class which is initialized from [`ViTMSNModel`]. Follow
[this notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) for a detailed tutorial on fine-tuning.
- MSN is particularly useful in the low-shot and extreme low-shot regimes. Notably, it achieves 75.7% top-1 accuracy with only 1% of ImageNet-1K
labels when fine-tuned.
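A minimal sketch of the initialization described in the tip above; `num_labels=10` is an arbitrary placeholder for your dataset's label count, and the classification head starts randomly initialized because only backbone weights were released.
```python
from transformers import ViTMSNForImageClassification

# Backbone weights come from the released checkpoint; the classification head
# is randomly initialized and must be fine-tuned on your own dataset.
model = ViTMSNForImageClassification.from_pretrained(
    "facebook/vit-msn-base",
    num_labels=10,  # placeholder: set to the number of classes in your dataset
)
```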
|
284_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
284_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import ViTMSNForImageClassification

model = ViTMSNForImageClassification.from_pretrained("facebook/vit-msn-base", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
|
284_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `facebook/vit-msn-base` model, we saw the following speedups during inference.
| Batch size | Average inference time (ms), eager mode | Average inference time (ms), SDPA mode | Speedup, SDPA / Eager (x) |
|
284_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#using-scaled-dot-product-attention-sdpa
|
.md
|
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 7 | 6 | 1.17 |
| 2 | 8 | 6 | 1.33 |
|
284_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#using-scaled-dot-product-attention-sdpa
|
.md
|
| 4 | 8 | 6 | 1.33 |
| 8 | 8 | 6 | 1.33 |
|
284_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT MSN.
<PipelineTag pipeline="image-classification"/>
- [`ViTMSNForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
284_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#resources
|
.md
|
- See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
284_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
|
.md
|
This is the configuration class to store the configuration of a [`ViTMSNModel`]. It is used to instantiate a ViT
MSN model according to the specified arguments, defining the model architecture. Instantiating a configuration with
the defaults will yield a similar configuration to that of the ViT
[facebook/vit_msn_base](https://huggingface.co/facebook/vit_msn_base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
284_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
284_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
284_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
|
.md
|
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
|
284_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
|
.md
|
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
|
284_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
|
.md
|
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
Example:
```python
>>> from transformers import ViTMSNModel, ViTMSNConfig
|
284_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
|
.md
|
>>> # Initializing a ViT MSN vit-msn-base style configuration
>>> configuration = ViTMSNConfig()
>>> # Initializing a model from the vit-msn-base style configuration
>>> model = ViTMSNModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
284_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnmodel
|
.md
|
The bare ViTMSN Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ViTMSNConfig`]): Model configuration class with all the parameters of the model.
|
284_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnmodel
|
.md
|
behavior.
Parameters:
config ([`ViTMSNConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
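A minimal sketch of the `forward` method listed above, assuming the `facebook/vit-msn-base` checkpoint mentioned elsewhere on this page:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMSNModel

image_processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-base")
model = ViTMSNModel.from_pretrained("facebook/vit-msn-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden state per image patch plus the [CLS] token.
print(outputs.last_hidden_state.shape)
```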
|
284_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnforimageclassification
|
.md
|
ViTMSN Model with an image classification head on top e.g. for ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ViTMSNConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
284_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnforimageclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
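A minimal `forward` sketch; note that because the released MSN checkpoints contain backbone weights only, the logits below come from an untrained head unless the model has been fine-tuned first.
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMSNForImageClassification

image_processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-base")
model = ViTMSNForImageClassification.from_pretrained("facebook/vit-msn-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
# Index of the highest-scoring class (only meaningful after fine-tuning).
print(logits.argmax(-1).item())
```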
|
284_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/
|
.md
|
<!--
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
285_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
285_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#overview
|
.md
|
The OLMoE model was proposed in [OLMoE: Open Mixture-of-Experts Language Models](https://arxiv.org/abs/2409.02060) by Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, Yuling Gu, Shane Arora, Akshita Bhagia, Dustin Schwenk, David Wadden, Alexander Wettig, Binyuan Hui, Tim Dettmers, Douwe Kiela, Ali Farhadi, Noah A. Smith, Pang Wei Koh, Amanpreet Singh, Hannaneh Hajishirzi.
|
285_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#overview
|
.md
|
OLMoE is a series of **O**pen **L**anguage **Mo**dels using sparse **M**ixture-**o**f-**E**xperts designed to enable the science of language models. We release all code, checkpoints, logs, and details involved in training these models.
The abstract from the paper is the following:
|
285_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#overview
|
.md
|
*We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones like Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE training, analyze routing in our model showing high
|
285_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#overview
|
.md
|
Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE training, analyze routing in our model showing high specialization, and open-source all aspects of our work: model weights, training data, code, and logs.*
|
285_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#overview
|
.md
|
This model was contributed by [Muennighoff](https://hf.co/Muennighoff).
The original code can be found [here](https://github.com/allenai/OLMoE).
|
285_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
This is the configuration class to store the configuration of a [`OlmoeModel`]. It is used to instantiate an OLMoE
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [allenai/OLMoE-1B-7B-0924](https://huggingface.co/allenai/OLMoE-1B-7B-0924).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
285_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50304):
Vocabulary size of the OLMoE model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`OlmoeModel`]
hidden_size (`int`, *optional*, defaults to 2048):
Dimension of the hidden representations.
|
285_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 2048):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 2048):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 16):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
|
285_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
|
285_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details checkout [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
|
285_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
|
285_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
bos_token_id (`int`, *optional*):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 50279):
End of stream token id.
|
285_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 50279):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie the input and output word embeddings.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
|
285_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
|
285_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
285_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
clip_qkv (`float`, *optional*):
If not `None`, elements of query, key and value attention states are clipped so that their
absolute value does not exceed this value.
num_experts_per_tok (`int`, *optional*, defaults to 8):
Number of selected experts.
num_experts (`int`, *optional*, defaults to 64):
Number of routed experts.
output_router_logits (`bool`, *optional*, defaults to `False`):
|
285_2_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
Number of routed experts.
output_router_logits (`bool`, *optional*, defaults to `False`):
Whether or not the router logits should be returned by the model. Enabling this will also
allow the model to output the auxiliary loss, including load balancing loss and router z-loss.
router_aux_loss_coef (`float`, *optional*, defaults to 0.01):
The aux loss factor for the total loss.
norm_topk_prob (`bool`, *optional*, defaults to `False`):
Whether to normalize the topk probabilities.
```python
|
285_2_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
norm_topk_prob (`bool`, *optional*, defaults to `False`):
Whether to normalize the topk probabilities.
```python
>>> from transformers import OlmoeModel, OlmoeConfig
|
285_2_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
|
.md
|
>>> # Initializing an OLMoE 7B A1B style configuration
>>> configuration = OlmoeConfig()
>>> # Initializing a model from the OLMoE 7B A1B style configuration
>>> model = OlmoeModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
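Building on the example above, a short sketch of two arguments described earlier: `num_key_value_heads` for grouped-query attention and the `rope_scaling` dictionary format. The values are illustrative only, not the checkpoint defaults.
```python
from transformers import OlmoeConfig

# Illustrative values: 16 query heads grouped over 4 key/value heads (GQA),
# plus dynamic RoPE scaling in the documented {"type", "factor"} format.
config = OlmoeConfig(
    num_attention_heads=16,
    num_key_value_heads=4,
    rope_scaling={"type": "dynamic", "factor": 2.0},
)
```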
|
285_2_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoemodel
|
.md
|
The bare Olmoe Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
285_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoemodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`OlmoeConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
285_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoemodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`OlmoeDecoderLayer`]
Args:
config: OlmoeConfig
Methods: forward
|
285_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeforcausallm
|
.md
|
No docstring available for OlmoeForCausalLM
Methods: forward
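Since no docstring is rendered here, a minimal generation sketch may help; it assumes the `allenai/OLMoE-1B-7B-0924` checkpoint referenced earlier on this page.
```python
from transformers import AutoTokenizer, OlmoeForCausalLM

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")
model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924")

inputs = tokenizer("Mixture-of-Experts models are", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```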
|
285_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
286_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
286_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
The OneFormer model was proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and
|
286_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference.
|
286_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_teaser.png"/>
The abstract from the paper is the following:
|
286_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
*Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance
|
286_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
|
.md
|
to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training
|
286_1_4
|