/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertforquestionanswering
.md
RemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
220_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertforquestionanswering
.md
behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
220_13_1
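A minimal sketch of how the span classification head is used, assuming the base `google/rembert` checkpoint (its QA head is randomly initialized until fine-tuned on a dataset such as SQuAD):

```python
>>> import torch
>>> from transformers import AutoTokenizer, RemBertForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("google/rembert")
>>> model = RemBertForQuestionAnswering.from_pretrained("google/rembert")

>>> # encode a (question, context) pair and run the span classification head
>>> inputs = tokenizer("Who released RemBERT?", "RemBERT was released by Google.", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # the predicted answer span runs from the argmax of the start logits to the argmax of the end logits
>>> start_idx = int(outputs.start_logits.argmax())
>>> end_idx = int(outputs.end_logits.argmax())
>>> answer = tokenizer.decode(inputs.input_ids[0, start_idx : end_idx + 1])
```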
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#tfrembertmodel
.md
No docstring available for TFRemBertModel Methods: call
220_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#tfrembertformaskedlm
.md
No docstring available for TFRemBertForMaskedLM Methods: call
220_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#tfrembertforcausallm
.md
No docstring available for TFRemBertForCausalLM Methods: call
220_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#tfrembertforsequenceclassification
.md
No docstring available for TFRemBertForSequenceClassification Methods: call
220_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#tfrembertformultiplechoice
.md
No docstring available for TFRemBertForMultipleChoice Methods: call
220_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#tfrembertfortokenclassification
.md
No docstring available for TFRemBertForTokenClassification Methods: call
220_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#tfrembertforquestionanswering
.md
No docstring available for TFRemBertForQuestionAnswering Methods: call </tf> </frameworkcontent>
220_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/
.md
<!--Copyright 2024 The Qwen Team and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
221_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
221_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#overview
.md
Qwen2MoE is the new series of large language models from the Qwen team. Previously, we released the Qwen series, including Qwen-72B, Qwen-1.8B, Qwen-VL, Qwen-Audio, etc.
221_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#model-details
.md
Qwen2MoE is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. Qwen2MoE has the following architectural choices: - Qwen2MoE is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
221_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#model-details
.md
- Qwen2MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters during runtime, while achieving performance comparable to `Qwen1.5-7B` with only 25% of the training resources. For more details, refer to the [release blog post](https://qwenlm.github.io/blog/qwen-moe/).
221_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#usage-tips
.md
`Qwen1.5-MoE-A2.7B` and `Qwen1.5-MoE-A2.7B-Chat` can be found on the [Huggingface Hub](https://huggingface.co/Qwen). In the following, we demonstrate how to use `Qwen1.5-MoE-A2.7B-Chat` for inference. Note that we use the ChatML format for dialog; in this demo we show how to leverage `apply_chat_template` for this purpose. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> device = "cuda" # the device to load the model onto
221_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#usage-tips
.md
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat", device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat") >>> prompt = "Give me a short introduction to large language model." >>> messages = [{"role": "user", "content": prompt}] >>> text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) >>> model_inputs = tokenizer([text], return_tensors="pt").to(device)
221_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#usage-tips
.md
>>> model_inputs = tokenizer([text], return_tensors="pt").to(device) >>> generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True) >>> generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)] >>> response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ```
221_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
This is the configuration class to store the configuration of a [`Qwen2MoeModel`]. It is used to instantiate a Qwen2MoE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of Qwen1.5-MoE-A2.7B [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
221_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 151936): Vocabulary size of the Qwen2MoE model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`Qwen2MoeModel`]. hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden representations.
221_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 5632): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. num_key_value_heads (`int`, *optional*, defaults to 16):
221_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
num_key_value_heads (`int`, *optional*, defaults to 16): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
221_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `32`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 32768):
221_4_4
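As an illustration of the head settings described above (the values here are for demonstration, not a released checkpoint's configuration):

```python
>>> from transformers import Qwen2MoeConfig

>>> # 16 query heads sharing 4 key/value heads: each group of 4 query heads
>>> # attends with one shared key/value head (GQA); num_key_value_heads=16
>>> # would recover MHA, and num_key_value_heads=1 would give MQA
>>> config = Qwen2MoeConfig(num_attention_heads=16, num_key_value_heads=4)
```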
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
max_position_embeddings (`int`, *optional*, defaults to 32768): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`):
221_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*):
221_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work on a longer `max_position_embeddings`, we recommend updating this value accordingly. Expected contents: `rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation.
221_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining.
221_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining. `attention_factor` (`float`, *optional*): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the `factor` field to infer the suggested value. `beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
221_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
`beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32. `beta_slow` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (<
221_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
`short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2. `long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (> `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
221_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2. `low_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to the low-frequency components of the RoPE. `high_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to the high-frequency components of the RoPE. use_sliding_window (`bool`, *optional*, defaults to `False`):
221_4_12
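A hedged example of the `rope_scaling` dictionary described above; the sketch uses the 'yarn' rope type with illustrative values, not settings taken from any released Qwen checkpoint:

```python
>>> from transformers import Qwen2MoeConfig

>>> # extend the usable context by a factor of 4 with YaRN; remember to raise
>>> # max_position_embeddings to match the scaled length
>>> config = Qwen2MoeConfig(
...     max_position_embeddings=131072,
...     rope_scaling={
...         "rope_type": "yarn",
...         "factor": 4.0,
...         "original_max_position_embeddings": 32768,
...     },
... )
```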
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
use_sliding_window (`bool`, *optional*, defaults to `False`): Whether to use sliding window attention. sliding_window (`int`, *optional*, defaults to 4096): Sliding window attention (SWA) window size. If not specified, will default to `4096`. max_window_layers (`int`, *optional*, defaults to 28): The number of layers that use SWA (Sliding Window Attention). The bottom layers use SWA while the top use full attention. attention_dropout (`float`, *optional*, defaults to 0.0):
221_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. decoder_sparse_step (`int`, *optional*, defaults to 1): The frequency of the MoE layer. moe_intermediate_size (`int`, *optional*, defaults to 1408): Intermediate size of the routed expert. shared_expert_intermediate_size (`int`, *optional*, defaults to 5632): Intermediate size of the shared expert. num_experts_per_tok (`int`, *optional*, defaults to 4): Number of selected experts.
221_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
Intermediate size of the shared expert. num_experts_per_tok (`int`, *optional*, defaults to 4): Number of selected experts. num_experts (`int`, *optional*, defaults to 60): Number of routed experts. norm_topk_prob (`bool`, *optional*, defaults to `False`): Whether to normalize the topk probabilities. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not the router logits should be returned by the model. Enabling this will also
221_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss, including the load balancing loss and the router z-loss. router_aux_loss_coef (`float`, *optional*, defaults to 0.001): The aux loss factor for the total loss. mlp_only_layers (`List[int]`, *optional*, defaults to `[]`): Indicates which layers use Qwen2MoeMLP rather than Qwen2MoeSparseMoeBlock. The list contains layer indices, from 0 to num_layers-1 if we have num_layers layers
221_4_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
The list contains layer indices, from 0 to num_layers-1 if we have num_layers layers. If `mlp_only_layers` is empty, `decoder_sparse_step` is used to determine the sparsity. ```python >>> from transformers import Qwen2MoeModel, Qwen2MoeConfig
221_4_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeconfig
.md
>>> # Initializing a Qwen2MoE style configuration >>> configuration = Qwen2MoeConfig() >>> # Initializing a model from the Qwen1.5-MoE-A2.7B style configuration >>> model = Qwen2MoeModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
221_4_18
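The MoE-specific arguments can be combined in the same way; the values below are purely illustrative:

```python
>>> from transformers import Qwen2MoeConfig

>>> # a sparse MoE block every second layer (decoder_sparse_step=2), with the
>>> # first two layers forced to use a dense Qwen2MoeMLP via mlp_only_layers
>>> config = Qwen2MoeConfig(
...     num_experts=60,
...     num_experts_per_tok=4,
...     decoder_sparse_step=2,
...     mlp_only_layers=[0, 1],
...     output_router_logits=True,  # also enables the auxiliary load-balancing loss
... )
```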
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moemodel
.md
The bare Qwen2MoE Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
221_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moemodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Qwen2MoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
221_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moemodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Qwen2MoeDecoderLayer`]. Args: config: Qwen2MoeConfig Methods: forward
221_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforcausallm
.md
No docstring available for Qwen2MoeForCausalLM Methods: forward
221_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforsequenceclassification
.md
The Qwen2MoE Model transformer with a sequence classification head on top (linear layer). [`Qwen2MoeForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
221_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
221_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
221_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforsequenceclassification
.md
and behavior. Parameters: config ([`Qwen2MoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
221_7_3
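A minimal sketch of the last-token classification behavior described above; loading the base checkpoint leaves the classification head randomly initialized, so fine-tune before relying on the predictions:

```python
>>> from transformers import AutoTokenizer, Qwen2MoeForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
>>> model = Qwen2MoeForSequenceClassification.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B", num_labels=2)

>>> # a pad token id lets the model find the last non-padding token in each row
>>> if tokenizer.pad_token is None:
...     tokenizer.pad_token = tokenizer.eos_token
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> inputs = tokenizer(["I love this!", "Not great."], padding=True, return_tensors="pt")
>>> logits = model(**inputs).logits  # one logits row per sequence, read off the last token
```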
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moefortokenclassification
.md
The Qwen2MoE Model transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
221_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moefortokenclassification
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Qwen2MoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
221_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moefortokenclassification
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
221_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforquestionanswering
.md
The Qwen2MoE Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
221_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Qwen2MoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
221_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_moe.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_moe/#qwen2moeforquestionanswering
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
221_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
222_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
222_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
The SeamlessM4T model was proposed in [SeamlessM4T — Massively Multilingual & Multimodal Machine Translation](https://dl.fbaipublicfiles.com/seamless/seamless_m4t_paper.pdf) by the Seamless Communication team from Meta AI. This is the **version 1** release of the model. For the updated **version 2** release, refer to the [Seamless M4T v2 docs](https://huggingface.co/docs/transformers/main/model_doc/seamless_m4t_v2).
222_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
SeamlessM4T is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. SeamlessM4T enables multiple tasks without relying on separate models: - Speech-to-speech translation (S2ST) - Speech-to-text translation (S2TT) - Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR)
222_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
- Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR) [`SeamlessM4TModel`] can perform all the above tasks, but each task also has its own dedicated sub-model. The abstract from the paper is the following:
222_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
*What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More specifically, conventional speech-to-speech translation systems rely on cascaded systems that perform translation progressively, putting high-performing unified systems out of reach. To
222_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
rely on cascaded systems that perform translation progressively, putting high-performing unified systems out of reach. To address these gaps, we introduce SeamlessM4T, a single model that supports speech-to-speech translation, speech-to-text translation, text-to-speech translation, text-to-text translation, and automatic speech recognition for up to 100 languages. To build this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0.
222_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0. Subsequently, we created a multimodal corpus of automatically aligned speech translations. Filtered and combined with human-labeled and pseudo-labeled data, we developed the first multilingual system capable of translating from and into English for both speech and text. On FLEURS, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement
222_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
and text. On FLEURS, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement of 20% BLEU over the previous SOTA in direct speech-to-text translation. Compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks
222_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#overview
.md
Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks compared to the current SOTA model. Critically, we evaluated SeamlessM4T on gender bias and added toxicity to assess translation safety. Finally, all contributions in this work are open-sourced and accessible at https://github.com/facebookresearch/seamless_communication*
222_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#usage
.md
First, load the processor and a checkpoint of the model: ```python >>> from transformers import AutoProcessor, SeamlessM4TModel
222_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#usage
.md
>>> processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium") >>> model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium") ``` You can seamlessly use this model on text or on audio, to generate either translated text or translated audio. Here is how to use the processor to process text and audio: ```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset
222_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#usage
.md
```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset >>> dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True) >>> audio_sample = next(iter(dataset))["audio"]
222_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#usage
.md
>>> # now, process it >>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt") >>> # now, process some English text as well >>> text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt") ```
222_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#speech
.md
[`SeamlessM4TModel`] can *seamlessly* generate text or speech with few or no changes. Let's target Russian voice translation: ```python >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() ``` With essentially the same code, we have translated English text and Arabic speech into Russian speech samples.
222_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#text
.md
Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4TModel.generate`]. This time, let's translate to French. ```python >>> # from audio >>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
222_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#text
.md
>>> # from text >>> output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) ```
222_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#1-use-dedicated-models
.md
[`SeamlessM4TModel`] is the top-level transformers model to generate speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task; the rest of the code is exactly the same: ```python >>> from transformers import SeamlessM4TForSpeechToSpeech
222_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#1-use-dedicated-models
.md
```python >>> from transformers import SeamlessM4TForSpeechToSpeech >>> model = SeamlessM4TForSpeechToSpeech.from_pretrained("facebook/hf-seamless-m4t-medium") ``` Similarly, you can replace the text-to-text generation snippet with the model dedicated to the T2TT task; you only have to remove `generate_speech=False`. ```python >>> from transformers import SeamlessM4TForTextToText >>> model = SeamlessM4TForTextToText.from_pretrained("facebook/hf-seamless-m4t-medium") ```
222_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#1-use-dedicated-models
.md
>>> model = SeamlessM4TForTextToText.from_pretrained("facebook/hf-seamless-m4t-medium") ``` Feel free to try out [`SeamlessM4TForSpeechToText`] and [`SeamlessM4TForTextToSpeech`] as well.
222_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#2-change-the-speaker-identity
.md
You can change the speaker used for speech synthesis with the `spkr_id` argument, as shown in the sketch below. Some `spkr_id` values work better than others for some languages!
222_6_0
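For example, reusing the Russian translation snippet from above with a different speaker (the value `4` is arbitrary; try a few for your target language):

```python
>>> # spkr_id selects one of the vocoder's speaker embeddings
>>> audio_array = model.generate(**text_inputs, tgt_lang="rus", spkr_id=4)[0].cpu().numpy().squeeze()
```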
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#3-change-the-generation-strategy
.md
You can use different [generation strategies](./generation_strategies) for speech and text generation, e.g. `.generate(input_ids=input_ids, text_num_beams=4, speech_do_sample=True)`, which will successively perform beam-search decoding on the text model and multinomial sampling on the speech model.
222_7_0
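A short sketch combining both strategies in one call; `speech_temperature` is included only to illustrate the `text_`/`speech_` prefix convention, and its value is arbitrary:

```python
>>> audio_array = model.generate(
...     **text_inputs,
...     tgt_lang="rus",
...     text_num_beams=4,        # beam search for the text sub-model
...     speech_do_sample=True,   # multinomial sampling for the speech sub-model
...     speech_temperature=0.6,
... )[0].cpu().numpy().squeeze()
```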
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#4-generate-speech-and-text-at-the-same-time
.md
Use `return_intermediate_token_ids=True` with [`SeamlessM4TModel`] to return both speech and text!
222_8_0
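Roughly as follows, assuming the generation output exposes the audio as `waveform` and the intermediate text token ids as `sequences` (attribute names assumed from the v1 generation output):

```python
>>> outputs = model.generate(**text_inputs, tgt_lang="rus", return_intermediate_token_ids=True)
>>> # assumed attributes: `waveform` holds the audio, `sequences` the intermediate text ids
>>> audio_array = outputs.waveform[0].cpu().numpy().squeeze()
>>> translated_text = processor.decode(outputs.sequences[0].tolist(), skip_special_tokens=True)
```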
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#model-architecture
.md
SeamlessM4T features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text.
222_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#model-architecture
.md
Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the [HiFi-GAN](https://arxiv.org/abs/2010.05646) architecture is placed on top of the second seq2seq model. Here's how the generation process works: - Input text or speech is processed through its specific encoder. - A decoder creates text tokens in the desired language.
222_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#model-architecture
.md
- Input text or speech is processed through its specific encoder. - A decoder creates text tokens in the desired language. - If speech generation is required, the second seq2seq model, following a standard encoder-decoder structure, generates unit tokens. - These unit tokens are then passed through the final vocoder to produce the actual speech.
222_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#model-architecture
.md
- These unit tokens are then passed through the final vocoder to produce the actual speech. This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/seamless_communication).
222_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tmodel
.md
The original SeamlessM4T Model transformer which can be used for every available task (S2ST, S2TT, T2TT, T2ST). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
222_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tmodel
.md
behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. current_modality (`str`, *optional*, defaults to `"text"`): Default modality. Used to initialize the model. Methods: generate
222_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tfortexttospeech
.md
The text-to-speech SeamlessM4T Model transformer which can be used for T2ST. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
222_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tfortexttospeech
.md
behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: generate
222_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tforspeechtospeech
.md
The speech-to-speech SeamlessM4T Model transformer which can be used for S2ST. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
222_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tforspeechtospeech
.md
behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: generate
222_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tfortexttotext
.md
The text-to-text SeamlessM4T Model transformer which can be used for T2TT. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
222_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tfortexttotext
.md
behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate
222_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tforspeechtotext
.md
The speech-to-text SeamlessM4T Model transformer which can be used for S2TT. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
222_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tforspeechtotext
.md
behavior. Parameters: config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate
222_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
This is the configuration class to store the configuration of a [`~SeamlessM4TModel`]. It is used to instantiate a SeamlessM4T model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SeamlessM4T [facebook/hf-seamless-m4t-medium](https://huggingface.co/facebook/hf-seamless-m4t-medium) architecture.
222_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
["facebook/hf-seamless-m4t-medium"](https://huggingface.co/"facebook/hf-seamless-m4t-medium") architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 256102): Vocabulary size of the SeamlessM4T model. Defines the number of different tokens that can be represented by
222_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
Vocabulary size of the SeamlessM4T model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`~SeamlessM4TModel`], [`~SeamlessM4TForTextToSpeech`] or [`~SeamlessM4TForTextToText`]. t2u_vocab_size (`int`, *optional*, defaults to 10082): Unit vocabulary size of the SeamlessM4T model. Defines the number of different unit tokens that can be represented by the `input_ids` passed when calling the Text-To-Units sub-model of [`~SeamlessM4TModel`],
222_15_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
represented by the `input_ids` passed when calling the Text-To-Units sub-model of [`~SeamlessM4TModel`], [`~SeamlessM4TForSpeechToSpeech`] or [`~SeamlessM4TForTextToSpeech`]. > Parameters shared across sub-models hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the "intermediate" layers in the architecture. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
222_15_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). max_position_embeddings (`int`, *optional*, defaults to 1024):
222_15_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model's text encoder and decoder might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). is_encoder_decoder (`bool`, *optional*, defaults to `True`): Whether the model is used as an encoder/decoder or not. encoder_layerdrop (`float`, *optional*, defaults to 0.05):
222_15_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
Whether the model is used as an encoder/decoder or not. encoder_layerdrop (`float`, *optional*, defaults to 0.05): The LayerDrop probability for the encoders. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.05): The LayerDrop probability for the decoders. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
222_15_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
for more details. activation_function (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the decoder and feed-forward layers. If string, `"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, decoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.1):
222_15_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all attention layers. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for all activation layers in the model. scale_embedding (`bool`, *optional*, defaults to `True`): Scale embeddings by dividing by sqrt(d_model). > Text encoder and text decoder specific parameters encoder_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer text encoder.
222_15_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
encoder_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer text encoder. encoder_ffn_dim (`int`, *optional*, defaults to 8192): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text encoder. encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer text encoder. decoder_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer text decoder.
222_15_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
decoder_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer text decoder. decoder_ffn_dim (`int`, *optional*, defaults to 8192): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text decoder. decoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer text decoder. decoder_start_token_id (`int`, *optional*, defaults to 3):
222_15_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
decoder_start_token_id (`int`, *optional*, defaults to 3): If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token. Only applied in the text decoder. max_new_tokens (`int`, *optional*, defaults to 256): The maximum numbers of text tokens to generate, ignoring the number of tokens in the prompt. pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ text token. Only applied to the text-decoder model.
222_15_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
.md
pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ text token. Only applied to the text-decoder model. bos_token_id (`int`, *optional*, defaults to 2): The id of the _beginning-of-stream_ text token. Only applied to the text-decoder model. eos_token_id (`int`, *optional*, defaults to 3): The id of the _end-of-stream_ text token. Only applied to the text-decoder model. > Speech encoder specific parameters speech_encoder_layers (`int`, *optional*, defaults to 24):
222_15_12