source: stringclasses (470 values)
url: stringlengths (49 to 167)
file_type: stringclasses (1 value)
chunk: stringlengths (1 to 512)
chunk_id: stringlengths (5 to 9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `num_attention_heads`. head_dim (`int`, *optional*, defaults to 256): The attention head dimension. hidden_activation (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
154_2_4
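As a rough sketch of the mean-pooling step described above (the projection-weight layout and the sizes below are illustrative assumptions, not the layout of any released checkpoint):

```python
import torch

# hypothetical multi-head K projection weight of shape (num_heads * head_dim, hidden_size)
num_heads, num_kv_heads, head_dim, hidden_size = 8, 2, 256, 2304
k_proj = torch.randn(num_heads * head_dim, hidden_size)

# group the heads and mean-pool each group to build the grouped-query key heads
group_size = num_heads // num_kv_heads
k_grouped = k_proj.view(num_kv_heads, group_size, head_dim, hidden_size).mean(dim=1)
k_proj_gqa = k_grouped.reshape(num_kv_heads * head_dim, hidden_size)  # (num_kv_heads * head_dim, hidden_size)
```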
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
The attention head dimension. hidden_activation (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`): The non-linear activation function (function or string) in the decoder. Will default to `"gelu_pytorch_tanh"` if not specified. `"gelu_pytorch_tanh"` uses an approximation of the `"gelu"` activation function. max_position_embeddings (`int`, *optional*, defaults to 8192): The maximum sequence length that this model might ever be used with.
154_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only
154_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. eos_token_id (`int`, *optional*, defaults to 1): End of stream token id. bos_token_id (`int`, *optional*, defaults to 2): Beginning of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `True`): Whether to tie weight embeddings. rope_theta (`float`, *optional*, defaults to 10000.0):
154_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
Whether to tie weight embeddings. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. attention_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
154_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. query_pre_attn_scalar (`float`, *optional*, defaults to 256): Scaling factor used on the attention scores. sliding_window (`int`, *optional*, defaults to 4096): In Gemma2, every other layer uses sliding window attention. This is the size of the sliding window. final_logit_softcapping (`float`, *optional*, defaults to 30.0): Scaling factor when applying tanh softcapping on the logits.
154_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
final_logit_softcapping (`float`, *optional*, defaults to 30.0): Scaling factor when applying tanh softcapping on the logits. attn_logit_softcapping (`float`, *optional*, defaults to 50.0): Scaling factor when applying tanh softcapping on the attention scores. cache_implementation (`str`, *optional*, defaults to `"hybrid"`): The cache type to be used with `generate`. ```python >>> from transformers import Gemma2Model, Gemma2Config >>> # Initializing a Gemma2 gemma2-7b style configuration
154_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
.md
```python >>> from transformers import Gemma2Model, Gemma2Config >>> # Initializing a Gemma2 gemma2-7b style configuration >>> configuration = Gemma2Config() >>> # Initializing a model from the gemma2-7b style configuration >>> model = Gemma2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
154_2_11
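The `final_logit_softcapping` and `attn_logit_softcapping` values described above are applied as a tanh soft cap. A minimal sketch of the operation, assuming the usual `cap * tanh(x / cap)` formulation, which smoothly bounds values to the interval (-cap, cap):

```python
import torch

def soft_cap(scores: torch.Tensor, cap: float) -> torch.Tensor:
    # smoothly bounds values to the open interval (-cap, cap)
    return cap * torch.tanh(scores / cap)

logits = torch.tensor([-100.0, -10.0, 0.0, 10.0, 100.0])
print(soft_cap(logits, 30.0))  # final logits capped around +/- 30 (final_logit_softcapping)
print(soft_cap(logits, 50.0))  # attention scores capped around +/- 50 (attn_logit_softcapping)
```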
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2model
.md
The bare Gemma2 Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
154_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2model
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Gemma2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
154_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2model
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Gemma2DecoderLayer`] Args: config: Gemma2Config Methods: forward
154_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2forcausallm
.md
No docstring available for Gemma2ForCausalLM Methods: forward
154_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2forsequenceclassification
.md
The Gemma2 Model transformer with a sequence classification head on top (linear layer). [`Gemma2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
154_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2forsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
154_5_1
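A minimal sketch of the last-token selection described above, assuming a simple cumulative-sum trick to locate the last non-padding position per row; this illustrates the logic rather than reproducing the actual implementation:

```python
import torch

pad_token_id = 0  # hypothetical padding id
input_ids = torch.tensor([[5, 6, 7, 0, 0],
                          [8, 9, 10, 11, 12]])

# index of the last token that is not padding in each row
non_pad = (input_ids != pad_token_id).int()
last_token_pos = non_pad.cumsum(-1).argmax(-1)  # -> tensor([2, 4])

# hidden_states would have shape (batch_size, seq_len, hidden_size);
# the classification head is applied at these positions.
```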
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2forsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
154_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2forsequenceclassification
.md
and behavior. Parameters: config ([`Gemma2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
154_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2fortokenclassification
.md
The Gemma2 Model transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
154_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2fortokenclassification
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Gemma2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
154_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2fortokenclassification
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
154_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
155_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
155_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
The Moshi model was proposed in [Moshi: a speech-text foundation model for real-time dialogue](https://kyutai.org/Moshi.pdf) by Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave and Neil Zeghidour.
155_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
Moshi is a speech-text foundation model that casts spoken dialogue as speech-to-speech generation. Starting from a text language model backbone, Moshi generates speech as tokens from the residual quantizer of a neural audio codec, while modeling separately its own speech and that of the user into parallel streams. This allows for the removal of explicit speaker turns, and the modeling of arbitrary conversational dynamics. Moshi also predicts time-aligned text tokens as a prefix to audio tokens. This “Inner
155_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
of arbitrary conversational dynamics. Moshi also predicts time-aligned text tokens as a prefix to audio tokens. This “Inner Monologue” method significantly improves the linguistic quality of generated speech and provides streaming speech recognition and text-to-speech. As a result, Moshi is the first real-time full-duplex spoken large language model, with a theoretical latency of 160ms, 200ms in practice.
155_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
<div style="text-align: center"> <img src="https://huggingface.co/datasets/ylacombe/benchmark-comparison/resolve/main/moshi_architecture.png"> </div> The abstract from the paper is the following:
155_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
*We introduce Moshi, a speech-text foundation model and full-duplex spoken dialogue framework. Current systems for spoken dialogue rely on pipelines of independent components, namely voice activity detection, speech recognition, textual dialogue and text-to-speech. Such frameworks cannot emulate the experience of real conversations. First, their complexity induces a latency of several seconds between interactions. Second, text being the intermediate modality for dialogue, non-linguistic information that
155_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
seconds between interactions. Second, text being the intermediate modality for dialogue, non-linguistic information that modifies meaning— such as emotion or non-speech sounds— is lost in the interaction. Finally, they rely on a segmentation into speaker turns, which does not take into account overlapping speech, interruptions and interjections. Moshi solves these independent issues altogether by casting spoken dialogue as speech-to-speech generation. Starting from a text language model backbone, Moshi
155_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
altogether by casting spoken dialogue as speech-to-speech generation. Starting from a text language model backbone, Moshi generates speech as tokens from the residual quantizer of a neural audio codec, while modeling separately its own speech and that of the user into parallel streams. This allows for the removal of explicit speaker turns, and the modeling of arbitrary conversational dynamics. We moreover extend the hierarchical semantic-to-acoustic token generation of previous work to first predict
155_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
dynamics. We moreover extend the hierarchical semantic-to-acoustic token generation of previous work to first predict time-aligned text tokens as a prefix to audio tokens. Not only this “Inner Monologue” method significantly improves the linguistic quality of generated speech, but we also illustrate how it can provide streaming speech recognition and text-to-speech. Our resulting model is the first real-time full-duplex spoken large language model, with a theoretical latency of 160ms, 200ms in practice,
155_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
model is the first real-time full-duplex spoken large language model, with a theoretical latency of 160ms, 200ms in practice, and is available at github.com/kyutai-labs/moshi.*
155_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
Moshi deals with 3 streams of information: 1. The user's audio 2. Moshi's audio 3. Moshi's textual output Similarly to [`~MusicgenModel`], audio is represented with audio codebooks, which can be interpreted like tokens. The main difference between text tokens and audio codebooks is that audio codebooks introduce an additional dimension of information. Text tokens are typically of dim `(batch_size, sequence_length)` but audio tokens are of dim `(batch_size, num_codebooks, sequence_length)`.
155_1_9
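To make the extra codebook dimension concrete, here is a small shape illustration (batch size, number of codebooks and sequence length are arbitrary; the vocabulary sizes are taken from the config defaults documented further down):

```python
import torch

batch_size, num_codebooks, sequence_length = 2, 8, 50

text_tokens = torch.randint(0, 32000, (batch_size, sequence_length))                # (batch_size, sequence_length)
audio_codes = torch.randint(0, 2048, (batch_size, num_codebooks, sequence_length))  # one extra codebook dimension
print(text_tokens.shape, audio_codes.shape)
```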
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
Moshi is made of 3 components: **1. The main decoder (Helium in the paper)** It corresponds to [`MoshiForCausalLM`]. It is strictly a classic text LLM that uses an architecture similar to [`~GemmaForCausalLM`]. In other words, it takes text tokens, embeds them, and passes them through the decoder and a language head to get text logits. **2. The depth decoder** On its own, it's also a classic LLM, but this time, instead of generating over the time dimension, it generates over the codebook dimension.
155_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#overview
.md
It also means that its context length is `num_codebooks`, so it can't generate more than `num_codebooks` tokens. Note that each timestamp - i.e. each codebook - gets its own set of linear layers and embeddings. **3. [`MimiModel`]** It's the audio encoder from Kyutai, which has recently been integrated into transformers and is used to "tokenize" audio. It plays the same role as [`~EncodecModel`] does in [`~MusicgenModel`].
155_1_11
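A minimal sketch of the per-codebook parameters mentioned above, assuming one embedding table and one output head per codebook position (sizes taken from the depth decoder config defaults documented further down); this illustrates the idea rather than the actual `MoshiDepthDecoder` module layout:

```python
import torch.nn as nn

num_codebooks, audio_vocab_size, hidden_size = 8, 2048, 1024

# one embedding table and one output head per codebook position
embeddings = nn.ModuleList([nn.Embedding(audio_vocab_size, hidden_size) for _ in range(num_codebooks)])
lm_heads = nn.ModuleList([nn.Linear(hidden_size, audio_vocab_size) for _ in range(num_codebooks)])

# during generation the depth decoder steps over the codebook dimension,
# so its sequence length is at most num_codebooks
```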
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#tips
.md
The original checkpoints can be converted using the conversion script `src/transformers/models/moshi/convert_moshi_transformers.py`
155_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
This implementation has two main aims: 1. quickly test model generation by simplifying the original API 2. simplify training. A training guide will come soon, but user contributions are welcomed! <Tip> It is designed for intermediate use. We strongly recommend using the original [implementation](https://github.com/kyutai-labs/moshi) to infer the model in real-time streaming. </Tip> **1. Model generation**
155_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
</Tip> **1. Model generation** Moshi is a streaming auto-regressive model with two streams of audio. To put it differently, one audio stream corresponds to what the model said/will say and the other audio stream corresponds to what the user said/will say. [`MoshiForConditionalGeneration.generate`] thus needs 3 inputs: 1. `input_ids` - corresponding to the text token history 2. `moshi_input_values` or `moshi_audio_codes` - corresponding to the model audio history
155_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
2. `moshi_input_values` or `moshi_audio_codes` - corresponding to the model audio history 3. `user_input_values` or `user_audio_codes` - corresponding to the user audio history These three inputs must be synchronized, meaning that their lengths must correspond to the same number of tokens. You can dynamically use the 3 inputs depending on what you want to test:
155_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
You can dynamically use the 3 inputs depending on what you want to test: 1. Simply check the model response to a user prompt - in that case, `input_ids` can be filled with pad tokens and `user_input_values` can be a zero tensor of the same shape as the user prompt. 2. Test more complex behaviour - in that case, you must be careful about how the input tokens are synchronized with the audio streams. <Tip>
155_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
<Tip> The original model synchronizes text with audio by padding the text in between each token enunciation. Following the example in the image below, `"Hello, I'm Moshi"` could be transformed to `"Hello,<pad><unk>I'm Moshi"`. </Tip> <div style="text-align: center"> <img src="https://huggingface.co/datasets/ylacombe/benchmark-comparison/resolve/main/moshi_text_sync.png"> </div>
155_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
<img src="https://huggingface.co/datasets/ylacombe/benchmark-comparison/resolve/main/moshi_text_sync.png"> </div> [`MoshiForConditionalGeneration.generate`] then auto-regressively feeds its own audio stream back to itself, but since it doesn't have access to the user input stream while using `transformers`, it will **assume that the user is producing blank audio**. ```python >>> from datasets import load_dataset, Audio >>> import torch, math
155_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
```python >>> from datasets import load_dataset, Audio >>> import torch, math >>> from transformers import MoshiForConditionalGeneration, AutoFeatureExtractor, AutoTokenizer >>> librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
155_3_6
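The snippet above and the chunks that follow use `feature_extractor`, `tokenizer`, `model`, `device`, `dtype` and `waveform_to_token_ratio` without their definitions appearing here. A hedged reconstruction of that setup is sketched below; the checkpoint name is taken from the `MoshiConfig` example later in this document, and the `waveform_to_token_ratio` formula is an assumption (audio-codec frame rate divided by the audio sampling rate):

```python
>>> from transformers import MoshiForConditionalGeneration, AutoFeatureExtractor, AutoTokenizer
>>> import torch

>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> dtype = torch.float16 if device == "cuda" else torch.float32

>>> # checkpoint name taken from the MoshiConfig example below; adjust to the checkpoint you use
>>> model = MoshiForConditionalGeneration.from_pretrained("kmhf/hf-moshiko", torch_dtype=dtype).to(device)
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("kmhf/hf-moshiko")
>>> tokenizer = AutoTokenizer.from_pretrained("kmhf/hf-moshiko")

>>> # assumption: number of text tokens per waveform sample = codec frame rate / audio sampling rate
>>> waveform_to_token_ratio = model.config.audio_encoder_config.frame_rate / feature_extractor.sampling_rate
```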
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
>>> # prepare user input audio >>> librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate)) >>> audio_sample = librispeech_dummy[-1]["audio"]["array"] >>> user_input_values = feature_extractor(raw_audio=audio_sample, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt").to(device=device, dtype=dtype)
155_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
>>> # prepare moshi input values - we suppose moshi didn't say anything while the user spoke >>> moshi_input_values = torch.zeros_like(user_input_values.input_values) >>> # prepare moshi input ids - we suppose moshi didn't say anything while the user spoke >>> num_tokens = math.ceil(moshi_input_values.shape[-1] * waveform_to_token_ratio) >>> input_ids = torch.ones((1, num_tokens), device=device, dtype=torch.int64) * tokenizer.encode("<pad>")[0]
155_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
>>> # generate 25 new tokens (around 2s of audio) >>> output = model.generate(input_ids=input_ids, user_input_values=user_input_values.input_values, moshi_input_values=moshi_input_values, max_new_tokens=25)
155_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-to-use-the-model
.md
>>> text_tokens = output.sequences >>> audio_waveforms = output.audio_sequences ``` **2. Model training** Most of the work has to be done during data creation/pre-processing, because of the need to align/synchronize streams. Once it's done, you can simply forward `text_labels` and `audio_labels` to [`MoshiForConditionalGeneration.forward`], alongside the usual inputs, to get the model loss. A training guide will come soon, but user contributions are welcomed!
155_3_10
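A hedged sketch of that training step, assuming the streams have already been aligned during preprocessing; the tensors below are placeholders standing in for properly synchronized data, and `text_labels` / `audio_labels` are the arguments named above:

```python
>>> # input_ids, moshi_audio_codes, user_audio_codes, text_labels and audio_labels
>>> # are assumed to come from your aligned/synchronized preprocessing pipeline
>>> outputs = model(
...     input_ids=input_ids,
...     moshi_audio_codes=moshi_audio_codes,
...     user_audio_codes=user_audio_codes,
...     text_labels=text_labels,
...     audio_labels=audio_labels,
... )
>>> loss = outputs.loss
>>> loss.backward()
```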
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-does-the-model-forward-the-inputs--generate
.md
1. The input streams are embedded and combined into `inputs_embeds`. 2. `inputs_embeds` is passed through the main decoder, which processes it like a normal LLM would. 3. The main decoder outputs `text logits` but also its `last hidden state` which is called `temporal context` in the paper.
155_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-does-the-model-forward-the-inputs--generate
.md
3. The main decoder outputs `text logits` but also its `last hidden state` which is called `temporal context` in the paper. 4. The depth decoder switches the dimension on which we forward / generate (codebooks instead of time). It uses the token generated from `text logits` and the `temporal context` to auto-regressively generate audio codebooks. This model was contributed by [Yoach Lacombe (ylacombe)](https://huggingface.co/ylacombe).
155_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#how-does-the-model-forward-the-inputs--generate
.md
This model was contributed by [Yoach Lacombe (ylacombe)](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/kyutai-labs/moshi).
155_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
This is the configuration class to store the configuration of a [`MoshiModel`]. It is used to instantiate a Moshi model according to the specified arguments, defining the audio encoder, Moshi depth decoder and Moshi decoder configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the Moshiko model, e.g. [kmhf/hf-moshiko](https://huggingface.co/kmhf/hf-moshiko)
155_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
e.g. [kmhf/hf-moshiko](https://huggingface.co/kmhf/hf-moshiko) Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the MoshiDecoder model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`MoshiDecoder`]. hidden_size (`int`, *optional*, defaults to 4096):
155_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
represented by the `input_ids` passed when calling [`MoshiDecoder`]. hidden_size (`int`, *optional*, defaults to 4096): Dimensionality of the layers and the pooler layer of the main decoder. num_hidden_layers (`int`, *optional*, defaults to 32): Number of decoder layers. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the main decoder block. num_key_value_heads (`int`, *optional*):
155_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
Number of attention heads for each attention layer in the main decoder block. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
155_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `num_attention_heads`. audio_vocab_size (`int`, *optional*):
155_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
audio_vocab_size (`int`, *optional*): Vocabulary size of the audio part of the model. Defines the number of different tokens that can be represented by the `audio_codes` passed when calling the Moshi models. max_position_embeddings (`int`, *optional*, defaults to 3000): The maximum sequence length that this model might ever be used with. Typically, set this to something large just in case (e.g., 512 or 1024 or 2048). rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings.
155_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`): The attention head dimension. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
155_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. sliding_window (`int`, *optional*, defaults to 3000): Sliding window attention window size. If not specified, will default to `3000`. attention_dropout (`float`, *optional*, defaults to 0.0):
155_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. ffn_dim (`int`, *optional*, defaults to 22528): Dimensionality of the "intermediate" (often named feed-forward) layer in the main decoder block. Must be even. rms_norm_eps (`float`, *optional*, defaults to 1e-08): The epsilon used by the rms normalization layers. num_codebooks (`int`, *optional*, defaults to 8): The number of audio codebooks for each audio channel.
155_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
num_codebooks (`int`, *optional*, defaults to 8): The number of audio codebooks for each audio channel. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie weight embeddings. kwargs (*optional*): Dictionary of keyword arguments. Notably: - **audio_encoder_config** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the audio encoder config. - **depth__config** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that
155_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
- **depth__config** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the depth decoder config. Example: ```python >>> from transformers import ( ... MoshiConfig, ... MoshiForConditionalGeneration, ... )
155_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
>>> configuration = MoshiConfig() >>> # Initializing a MoshiForConditionalGeneration (with random weights) from the kmhf/hf-moshiko style configuration >>> model = MoshiForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> # Saving the model, including its configuration >>> model.save_pretrained("kmhf/hf-moshiko")
155_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiconfig
.md
>>> # Saving the model, including its configuration >>> model.save_pretrained("kmhf/hf-moshiko") >>> # loading model and config from pretrained folder >>> moshi_config = MoshiConfig.from_pretrained("kmhf/hf-moshiko") >>> model = MoshiForConditionalGeneration.from_pretrained("kmhf/hf-moshiko", config=moshi_config) ```
155_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
This is the configuration class to store the configuration of a [`MoshiDepthDecoder`]. It is used to instantiate a Moshi depth decoder model according to the specified arguments, defining the Moshi depth decoder config. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000):
155_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the MoshiDepthDecoder model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`MoshiDepthDecoder`]. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer of the depth decoder. input_size (`int`, *optional*, defaults to 4096):
155_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
Dimensionality of the layers and the pooler layer of the depth decoder. input_size (`int`, *optional*, defaults to 4096): Dimensionality of the input hidden states. Used to connect the main decoder to the depth decoder. num_hidden_layers (`int`, *optional*, defaults to 6): Number of depth decoder layers. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the depth decoder block. num_key_value_heads (`int`, *optional*):
155_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
Number of attention heads for each attention layer in the depth decoder block. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
155_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `num_attention_heads`. audio_vocab_size (`int`, *optional*, defaults to 2048):
155_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
audio_vocab_size (`int`, *optional*, defaults to 2048): Vocabulary size of the audio part of the model. Defines the number of different tokens that can be represented by the `audio_codes` passed when calling the Moshi models. max_position_embeddings (`int`, *optional*, defaults to 9): The maximum sequence length that this model might ever be used with. Typically, set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
155_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
just in case (e.g., 512 or 1024 or 2048). hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the depth decoder. head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`): The attention head dimension. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (`bool`, *optional*, defaults to `True`):
155_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. sliding_window (`int`, *optional*, defaults to 8): Sliding window attention window size. If not specified, will default to `8`. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. ffn_dim (`int`, *optional*, defaults to 5632):
155_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
The dropout ratio for the attention probabilities. ffn_dim (`int`, *optional*, defaults to 5632): Dimensionality of the "intermediate" (often named feed-forward) layer in the depth decoder block. Must be even. rms_norm_eps (`float`, *optional*, defaults to 1e-08): The epsilon used by the rms normalization layers. num_codebooks (`int`, *optional*, defaults to 8): The number of audio codebooks for each audio channel. tie_word_embeddings (`bool`, *optional*, defaults to `False`):
155_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
The number of audio codebooks for each audio channel. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie weight embeddings. kwargs (*optional*): Dictionary of keyword arguments. Notably: - **audio_encoder_config** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the audio encoder config. Example: ```python >>> from transformers import ( ... MoshiDepthConfig, ... MoshiDepthDecoder, ... )
155_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshidepthconfig
.md
>>> configuration = MoshiDepthConfig() >>> # Initializing a MoshiDepthDecoder (with random weights) from the kmhf/hf-moshiko style configuration >>> model = MoshiDepthDecoder(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
155_6_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshimodel
.md
The bare Moshi Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
155_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshimodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MoshiConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
155_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshimodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`MoshiDecoderLayer`] Args: config: MoshiConfig Methods: forward
155_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiforcausallm
.md
The Moshi decoder model with a text language modelling head on top. Only usable for text. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
155_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MoshiConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
155_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
155_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiforconditionalgeneration
.md
The original Moshi model with an audio encoder, a Moshi depth decoder and a Moshi decoder, for speech-to-speech. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
155_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiforconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MoshiConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
155_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moshi.md
https://huggingface.co/docs/transformers/en/model_doc/moshi/#moshiforconditionalgeneration
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate - get_unconditional_inputs
155_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
156_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
156_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer
.md
<Tip> This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight breaking changes to fix in the future. If you see something strange, file a [GitHub Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title). </Tip>
156_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#overview
.md
The MaskFormer model was proposed in [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification. The abstract from the paper is the following:
156_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#overview
.md
*Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks,
156_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#overview
.md
Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask
156_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#overview
.md
we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.*
156_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#overview
.md
The figure below illustrates the architecture of MaskFormer. Taken from the [original paper](https://arxiv.org/abs/2107.06278). <img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png"/> This model was contributed by [francesco](https://huggingface.co/francesco). The original code can be found [here](https://github.com/facebookresearch/MaskFormer).
156_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#usage-tips
.md
- MaskFormer's Transformer decoder is identical to the decoder of [DETR](detr). During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter `use_auxiliary_loss` of [`MaskFormerConfig`] to `True`, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters).
156_3_0
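As a minimal sketch of the flag mentioned above (all other settings stay at their defaults; the model is initialized with random weights):

```python
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

# enables per-decoder-layer prediction FFNs and Hungarian losses during training
config = MaskFormerConfig(use_auxiliary_loss=True)
model = MaskFormerForInstanceSegmentation(config)
```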
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#usage-tips
.md
- If you want to train the model in a distributed environment across multiple nodes, you should update the `get_num_masks` function inside the `MaskFormerLoss` class of `modeling_maskformer.py`. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation [here](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
156_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#usage-tips
.md
- One can use [`MaskFormerImageProcessor`] to prepare images and optional targets for the model.
156_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#usage-tips
.md
- To get the final segmentation, depending on the task, you can call [`~MaskFormerImageProcessor.post_process_semantic_segmentation`] or [`~MaskFormerImageProcessor.post_process_panoptic_segmentation`]. Both tasks can be solved using the [`MaskFormerForInstanceSegmentation`] output; panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object(s) (e.g. sky) together.
156_3_3
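A short sketch of the post-processing flow described above; the checkpoint name is an illustrative, publicly available MaskFormer checkpoint, and the semantic variant is shown (the panoptic call works analogously with `post_process_panoptic_segmentation`):

```python
import requests
import torch
from PIL import Image
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation

# illustrative checkpoint; any MaskFormer checkpoint on the Hub works the same way
processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# semantic map of shape (height, width) holding a class id per pixel
semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```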
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#resources
.md
<PipelineTag pipeline="image-segmentation"/> - All notebooks that illustrate inference as well as fine-tuning on custom data with MaskFormer can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MaskFormer). - Scripts for finetuning [`MaskFormer`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/instance-segmentation).
156_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
models.maskformer.modeling_maskformer.MaskFormerModelOutput Class for outputs of [`MaskFormerModel`]. This class returns all the needed hidden states to compute the logits. Args: encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Last hidden states (final feature map) of the last stage of the encoder model (backbone). pixel_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
156_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
pixel_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN). transformer_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Last hidden states (final feature map) of the last stage of the transformer decoder model.
156_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
Last hidden states (final feature map) of the last stage of the transformer decoder model. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder model at the output of each stage.
156_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
model at the output of each stage. pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage.
156_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
decoder model at the output of each stage. transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the transformer decoder at the output of each stage.
156_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
transformer decoder at the output of each stage. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` containing `encoder_hidden_states`, `pixel_decoder_hidden_states` and `decoder_hidden_states`. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
156_5_5