| column | type | details |
|---|---|---|
| source | string (categorical) | 470 distinct values |
| url | string | length 49-167 |
| file_type | string (categorical) | 1 distinct value |
| chunk | string | length 1-512 |
| chunk_id | string | length 5-9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxmodel
.md
The bare JUKEBOX Model used for music generation. Four sampling techniques are supported: `primed_sample`, `upsample`, `continue_sample` and `ancestral_sample`. It does not have a `forward` method as the training is not end-to-end. If you want to fine-tune the model, it is recommended to use the `JukeboxPrior` class and train each prior individually. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
404_8_0
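For illustration, the sketch below shows how one of these sampling methods could be called. It assumes the `openai/jukebox-1b-lyrics` checkpoint and the metadata fields (`artist`, `genres`, `lyrics`) accepted by `JukeboxTokenizer`; the exact keyword arguments of `ancestral_sample` (e.g. `sample_length`) are taken as assumptions rather than a definitive reference.

```python
from transformers import JukeboxModel, JukeboxTokenizer

# assumed checkpoint name; min_duration=0 keeps the example short
model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", min_duration=0).eval()
tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")

# metadata conditioning: artist, genre and lyrics
metas = dict(artist="Zac Brown Band", genres="Country", lyrics="I met a traveller from an antique land")
labels = tokenizer(**metas)["input_ids"]

# sample a (very) short sequence of music tokens from scratch with ancestral sampling
music_tokens = model.ancestral_sample(labels, sample_length=400)
```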
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxmodel
.md
individually. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
404_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxmodel
.md
and behavior. Parameters: config (`JukeboxConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: ancestral_sample - primed_sample - continue_sample - upsample - _sample
404_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxprior
.md
The JukeboxPrior class, which is a wrapper around the various conditioning modules and the transformer. JukeboxPriors can be seen as language models trained on music. They model the next `music token` prediction task. If a (lyric) `encoder` is defined, they also model the `next character` prediction task on the lyrics. They can be conditioned on timing, artist, genre, lyrics and codes from lower-level priors. Args: config (`JukeboxPriorConfig`):
404_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxprior
.md
genre, lyrics and codes from lower-level priors. Args: config (`JukeboxPriorConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. level (`int`, *optional*): Current level of the prior. Should be in range `[0, nb_priors]`. nb_priors (`int`, *optional*, defaults to 3): Total number of priors.
404_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxprior
.md
nb_priors (`int`, *optional*, defaults to 3): Total number of priors. vqvae_encoder (`Callable`, *optional*): Encoding method of the VQVAE encoder used in the forward pass of the model. A function is passed instead of the vqvae module to avoid including its parameters. vqvae_decoder (`Callable`, *optional*): Decoding method of the VQVAE decoder used in the forward pass of the model. A function is passed instead of the vqvae module to avoid including its parameters. Methods: sample - forward
404_9_2
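As a minimal sketch of training a prior individually (as recommended above), a single `JukeboxPrior` could be instantiated directly from a config. The default `JukeboxPriorConfig()` values and the exact constructor keywords are assumptions based on the argument list above.

```python
from transformers import JukeboxPrior, JukeboxPriorConfig

# build the bottom-level prior; level and nb_priors follow the arguments documented above
config = JukeboxPriorConfig()
prior = JukeboxPrior(config, level=0, nb_priors=3)

# a fine-tuning loop would then optimise prior.parameters() on music tokens for this level
```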
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvae
.md
The Hierarchical VQ-VAE model used in Jukebox. This model follows the Hierarchical VQ-VAE paper by [Will Williams, Sam Ringer, Tom Ash, John Hughes, David MacLeod, Jamie Dougherty](https://arxiv.org/abs/2002.08111). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
404_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvae
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config (`JukeboxConfig`): Model configuration class with all the parameters of the model.
404_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvae
.md
and behavior. Parameters: config (`JukeboxConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - encode - decode
404_10_2
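A minimal sketch of the `decode` method, turning a single-level list of discrete music tokens back into raw audio. The checkpoint name, the list-of-tensors input format and the output shape are assumptions, not a definitive reference.

```python
import torch
from transformers import JukeboxVQVAE, set_seed

model = JukeboxVQVAE.from_pretrained("openai/jukebox-1b-lyrics").eval()
set_seed(0)

# a single-level list of (batch, length) code indices
music_tokens = [torch.randint(0, 100, (4, 1))]
with torch.no_grad():
    audio = model.decode(music_tokens)  # raw audio of shape (batch, time, 1)
```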
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
405_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
405_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
.md
The MusicGen model was proposed in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez. MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a
405_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
.md
on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or *audio codes*, conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio waveform.
405_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
.md
to recover the audio waveform. Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g. hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass. The abstract from the paper is the following:
405_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
.md
The abstract from the paper is the following: *We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for
405_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
.md
of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human
405_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
.md
controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.* This model was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found
405_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
.md
This model was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found [here](https://github.com/facebookresearch/audiocraft). The pre-trained checkpoints can be found on the [Hugging Face Hub](https://huggingface.co/models?sort=downloads&search=facebook%2Fmusicgen-).
405_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#usage-tips
.md
- After downloading the original checkpoints from [here](https://github.com/facebookresearch/audiocraft/blob/main/docs/MUSICGEN.md#importing--exporting-models), you can convert them using the **conversion script** available at `src/transformers/models/musicgen/convert_musicgen_transformers.py` with the following command: ```bash python src/transformers/models/musicgen/convert_musicgen_transformers.py \ --checkpoint small --pytorch_dump_folder /output/path --safe_serialization ```
405_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#generation
.md
MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly better results than greedy, so we encourage using sampling mode where possible. Sampling is enabled by default, and can be explicitly specified by setting `do_sample=True` in the call to [`MusicgenForConditionalGeneration.generate`], or by overriding the model's generation config (see below).
405_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#generation
.md
or by overriding the model's generation config (see below). Generation is limited by the sinusoidal positional embeddings to 30-second inputs, meaning MusicGen cannot generate more than 30 seconds of audio (1503 tokens). Input audio passed during audio-prompted generation also contributes to this limit, so given a 20-second audio input, MusicGen cannot generate more than 10 seconds of additional audio.
405_3_1
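The token budget implied by these numbers can be computed directly: 1503 tokens for 30 seconds is roughly 50 audio tokens per second, so a 20-second prompt leaves about 500 tokens for `max_new_tokens`. The arithmetic below only illustrates that limit.

```python
# ~50 audio tokens per second (1503 tokens ≈ 30 s of audio)
tokens_per_second = 1503 / 30
prompt_seconds = 20
remaining_seconds = 30 - prompt_seconds
max_new_tokens = int(remaining_seconds * tokens_per_second)  # ≈ 501 tokens of new audio
```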
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#generation
.md
given an input of 20 seconds of audio, MusicGen cannot generate more than 10 seconds of additional audio. Transformers supports both mono (1-channel) and stereo (2-channel) variants of MusicGen. The mono channel versions generate a single set of codebooks. The stereo versions generate 2 sets of codebooks, 1 for each channel (left/right), and each set of codebooks is decoded independently through the audio compression model. The audio streams for each channel are combined to give the final stereo output.
405_3_2
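A stereo checkpoint is used exactly like the mono ones; the sketch below assumes the stereo variant is published under a name such as `facebook/musicgen-stereo-small`.

```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# assumed checkpoint name for the stereo variant
processor = AutoProcessor.from_pretrained("facebook/musicgen-stereo-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-stereo-small")

inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
# stereo output shape: (batch_size, 2, sequence_length)
```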
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#unconditional-generation
.md
The inputs for unconditional (or 'null') generation can be obtained through the method [`MusicgenForConditionalGeneration.get_unconditional_inputs`]: ```python >>> from transformers import MusicgenForConditionalGeneration >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> unconditional_inputs = model.get_unconditional_inputs(num_samples=1)
405_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#unconditional-generation
.md
>>> audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256) ``` The audio outputs are a three-dimensional Torch tensor of shape `(batch_size, num_channels, sequence_length)`. To listen to the generated audio samples, you can either play them in an ipynb notebook: ```python from IPython.display import Audio
405_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#unconditional-generation
.md
sampling_rate = model.config.audio_encoder.sampling_rate Audio(audio_values[0].numpy(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. `scipy`: ```python >>> import scipy >>> sampling_rate = model.config.audio_encoder.sampling_rate >>> scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) ```
405_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#text-conditional-generation
.md
The model can generate an audio sample conditioned on a text prompt through use of the [`MusicgenProcessor`] to pre-process the inputs: ```python >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small") >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
405_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#text-conditional-generation
.md
>>> inputs = processor( ... text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ``` The `guidance_scale` is used in classifier free guidance (CFG), setting the weighting between the conditional logits
405_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#text-conditional-generation
.md
``` The `guidance_scale` is used in classifier free guidance (CFG), setting the weighting between the conditional logits (which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or 'null' prompt). Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer audio quality. CFG is enabled by setting `guidance_scale > 1`. For best results,
405_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#text-conditional-generation
.md
prompt, usually at the expense of poorer audio quality. CFG is enabled by setting `guidance_scale > 1`. For best results, use `guidance_scale=3` (default).
405_5_3
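Conceptually, CFG combines the two logit streams as a weighted extrapolation; the sketch below illustrates the formula only and is not the exact logits processor used by the library.

```python
import torch

def cfg_logits(cond_logits: torch.Tensor, uncond_logits: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    # guidance_scale = 1 recovers the conditional logits; larger values push the
    # next-token distribution further towards the text-conditioned prediction
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)
```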
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
.md
The same [`MusicgenProcessor`] can be used to pre-process an audio prompt that is used for audio continuation. In the following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command below: ```bash pip install --upgrade pip pip install datasets[audio] ``` ```python >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> from datasets import load_dataset
405_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
.md
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small") >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True) >>> sample = next(iter(dataset))["audio"] >>> # take the first half of the audio sample >>> sample["array"] = sample["array"][: len(sample["array"]) // 2]
405_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
.md
>>> inputs = processor( ... audio=sample["array"], ... sampling_rate=sample["sampling_rate"], ... text=["80s blues track with groovy saxophone"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ``` For batched audio-prompted generation, the generated `audio_values` can be post-processed to remove padding by using the [`MusicgenProcessor`] class: ```python
405_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
.md
[`MusicgenProcessor`] class: ```python >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> from datasets import load_dataset
405_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
.md
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small") >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True) >>> sample = next(iter(dataset))["audio"] >>> # take the first quarter of the audio sample >>> sample_1 = sample["array"][: len(sample["array"]) // 4] >>> # take the first half of the audio sample >>> sample_2 = sample["array"][: len(sample["array"]) // 2]
405_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
.md
>>> # take the first half of the audio sample >>> sample_2 = sample["array"][: len(sample["array"]) // 2] >>> inputs = processor( ... audio=[sample_1, sample_2], ... sampling_rate=sample["sampling_rate"], ... text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
405_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
.md
>>> # post-process to remove padding from the batched audio >>> audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask) ```
405_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#generation-configuration
.md
The default parameters that control the generation process, such as sampling, guidance scale and number of generated tokens, can be found in the model's generation config, and updated as desired: ```python >>> from transformers import MusicgenForConditionalGeneration >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> # inspect the default generation config >>> model.generation_config
405_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#generation-configuration
.md
>>> # inspect the default generation config >>> model.generation_config >>> # increase the guidance scale to 4.0 >>> model.generation_config.guidance_scale = 4.0 >>> # decrease the max length to 256 tokens >>> model.generation_config.max_length = 256 ``` Note that any arguments passed to the generate method will **supersede** those in the generation config, so setting `do_sample=False` in the call to generate will supersede the setting of `model.generation_config.do_sample` in the generation config.
405_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
.md
The MusicGen model can be decomposed into three distinct stages: 1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5 2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations
405_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
.md
3. Audio encoder/decoder: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [`MusicgenForCausalLM`], or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class
405_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
.md
or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class [`MusicgenForConditionalGeneration`]. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first specifying the correct config, or be accessed through the `.decoder` attribute of the composite model: ```python >>> from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration
405_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
.md
>>> # Option 1: get decoder config and pass to `.from_pretrained` >>> decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder >>> decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config)
405_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
.md
>>> # Option 2: load the entire composite model, but only return the decoder >>> decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder ``` Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder [`MusicgenForCausalLM`] can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can
405_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
.md
can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can be combined with the frozen text encoder and audio encoder/decoders to recover the composite [`MusicgenForConditionalGeneration`] model. Tips: * MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model.
405_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
.md
* MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model. * Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable `do_sample` in the call to [`MusicgenForConditionalGeneration.generate`]
405_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
This is the configuration class to store the configuration of a [`MusicgenDecoder`]. It is used to instantiate a MusicGen decoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MusicGen [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
405_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 2048): Vocabulary size of the MusicgenDecoder model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`MusicgenDecoder`]. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer.
405_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of decoder layers. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer block. ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer block.
405_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer block. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the decoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler.
405_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Typically, set this to something large
405_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
The maximum sequence length that this model might ever be used with. Typically, set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_factor (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
405_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. scale_embedding (`bool`, *optional*, defaults to `False`): Scale embeddings by dividing by sqrt(hidden_size). use_cache (`bool`, *optional*, defaults to `True`): Whether the model should return the last key/values attentions (not used by all models) num_codebooks (`int`, *optional*, defaults to 4): The number of parallel codebooks forwarded to the model.
405_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
.md
num_codebooks (`int`, *optional*, defaults to 4): The number of parallel codebooks forwarded to the model. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether input and output word embeddings should be tied. audio_channels (`int`, *optional*, defaults to 1): Number of channels in the audio data. Either 1 for mono or 2 for stereo. Stereo models generate a separate audio stream for the left/right output channels. Mono models generate a single audio stream output.
405_9_7
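A short sketch instantiating the decoder config with a few of the arguments listed above; the values shown simply restate the documented defaults.

```python
from transformers import MusicgenDecoderConfig

decoder_config = MusicgenDecoderConfig(
    vocab_size=2048,
    hidden_size=1024,
    num_hidden_layers=24,
    num_attention_heads=16,
    num_codebooks=4,
    audio_channels=1,  # 1 for mono, 2 for stereo
)
```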
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenconfig
.md
This is the configuration class to store the configuration of a [`MusicgenModel`]. It is used to instantiate a MusicGen model according to the specified arguments, defining the text encoder, audio encoder and MusicGen decoder configs. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: kwargs (*optional*): Dictionary of keyword arguments. Notably:
405_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenconfig
.md
Args: kwargs (*optional*): Dictionary of keyword arguments. Notably: - **text_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the text encoder config. - **audio_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the audio encoder config. - **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the decoder config. Example: ```python
405_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenconfig
.md
the decoder config. Example: ```python >>> from transformers import ( ... MusicgenConfig, ... MusicgenDecoderConfig, ... T5Config, ... EncodecConfig, ... MusicgenForConditionalGeneration, ... )
405_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenconfig
.md
>>> # Initializing text encoder, audio encoder, and decoder model configurations >>> text_encoder_config = T5Config() >>> audio_encoder_config = EncodecConfig() >>> decoder_config = MusicgenDecoderConfig() >>> configuration = MusicgenConfig.from_sub_models_config( ... text_encoder_config, audio_encoder_config, decoder_config ... )
405_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenconfig
.md
>>> # Initializing a MusicgenForConditionalGeneration (with random weights) from the facebook/musicgen-small style configuration >>> model = MusicgenForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> config_text_encoder = model.config.text_encoder >>> config_audio_encoder = model.config.audio_encoder >>> config_decoder = model.config.decoder >>> # Saving the model, including its configuration >>> model.save_pretrained("musicgen-model")
405_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenconfig
.md
>>> # Saving the model, including its configuration >>> model.save_pretrained("musicgen-model") >>> # loading model and config from pretrained folder >>> musicgen_config = MusicgenConfig.from_pretrained("musicgen-model") >>> model = MusicgenForConditionalGeneration.from_pretrained("musicgen-model", config=musicgen_config) ```
405_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenprocessor
.md
Constructs a MusicGen processor which wraps an EnCodec feature extractor and a T5 tokenizer into a single processor class. [`MusicgenProcessor`] offers all the functionalities of [`EncodecFeatureExtractor`] and [`T5Tokenizer`]. See [`~MusicgenProcessor.__call__`] and [`~MusicgenProcessor.decode`] for more information. Args: feature_extractor (`EncodecFeatureExtractor`): An instance of [`EncodecFeatureExtractor`]. The feature extractor is a required input. tokenizer (`T5Tokenizer`):
405_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenprocessor
.md
An instance of [`EncodecFeatureExtractor`]. The feature extractor is a required input. tokenizer (`T5Tokenizer`): An instance of [`T5Tokenizer`]. The tokenizer is a required input.
405_11_1
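A minimal sketch of loading the processor and accessing its two components, assuming the `facebook/musicgen-small` checkpoint used in the examples above.

```python
from transformers import MusicgenProcessor

processor = MusicgenProcessor.from_pretrained("facebook/musicgen-small")
feature_extractor = processor.feature_extractor  # EncodecFeatureExtractor
tokenizer = processor.tokenizer  # T5Tokenizer
```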
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenmodel
.md
The bare Musicgen decoder model outputting raw hidden-states without any specific head on top. The Musicgen model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation
405_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenmodel
.md
encoder-decoder transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
405_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MusicgenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
405_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
405_12_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforcausallm
.md
The MusicGen decoder model with a language modelling head on top. The Musicgen model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
405_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforcausallm
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
405_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforcausallm
.md
and behavior. Parameters: config ([`MusicgenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
405_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforconditionalgeneration
.md
The composite MusicGen model with a text encoder, audio encoder and Musicgen decoder, for music generation tasks with one or both of text and audio prompts. The Musicgen model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation
405_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforconditionalgeneration
.md
encoder-decoder transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
405_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MusicgenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
405_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforconditionalgeneration
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
405_14_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
406_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
406_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#overview
.md
The Swin Transformer was proposed in [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. The abstract from the paper is the following: *This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone
406_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#overview
.md
*This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with **S**hifted
406_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#overview
.md
To address these differences, we propose a hierarchical Transformer whose representation is computed with **S**hifted **win**dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it
406_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#overview
.md
various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and
406_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#overview
.md
(53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png" alt="drawing" width="600"/>
406_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#overview
.md
alt="drawing" width="600"/> <small> Swin Transformer architecture. Taken from the <a href="https://arxiv.org/abs/2102.03334">original paper</a>.</small> This model was contributed by [novice03](https://huggingface.co/novice03). The Tensorflow version of this model was contributed by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/microsoft/Swin-Transformer).
406_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#usage-tips
.md
- Swin pads the inputs, supporting any input height and width (as long as they are divisible by `32`). - Swin can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, sequence_length, num_channels)`.
406_2_0
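A small sketch of the backbone-style outputs described above, using a randomly initialized model and dummy pixel values so the shapes can be inspected without downloading a checkpoint; the attribute names follow the tip above.

```python
import torch
from transformers import SwinConfig, SwinModel

config = SwinConfig()
model = SwinModel(config)

pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch
with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)

print(outputs.last_hidden_state.shape)           # (batch_size, sequence_length, num_channels)
print(outputs.reshaped_hidden_states[-1].shape)  # (batch, num_channels, height, width)
```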
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer. <PipelineTag pipeline="image-classification"/> - [`SwinForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
406_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#resources
.md
- See also: [Image classification task guide](../tasks/image_classification) Besides that: - [`SwinForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
406_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
This is the configuration class to store the configuration of a [`SwinModel`]. It is used to instantiate a Swin model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Swin [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) architecture.
406_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
[microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 4): The size (resolution) of each patch.
406_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 4): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. embed_dim (`int`, *optional*, defaults to 96): Dimensionality of patch embedding. depths (`list(int)`, *optional*, defaults to `[2, 2, 6, 2]`): Depth of each layer in the Transformer encoder. num_heads (`list(int)`, *optional*, defaults to `[3, 6, 12, 24]`):
406_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
Depth of each layer in the Transformer encoder. num_heads (`list(int)`, *optional*, defaults to `[3, 6, 12, 24]`): Number of attention heads in each layer of the Transformer encoder. window_size (`int`, *optional*, defaults to 7): Size of windows. mlp_ratio (`float`, *optional*, defaults to 4.0): Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not a learnable bias should be added to the queries, keys and values.
406_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. drop_path_rate (`float`, *optional*, defaults to 0.1): Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
406_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. use_absolute_embeddings (`bool`, *optional*, defaults to `False`): Whether or not to add absolute position embeddings to the patch embeddings. initializer_range (`float`, *optional*, defaults to 0.02):
406_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. encoder_stride (`int`, *optional*, defaults to 32): Factor to increase the spatial resolution by in the decoder head for masked image modeling. out_features (`List[str]`, *optional*):
406_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*):
406_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. Example: ```python
406_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
same order as defined in the `stage_names` attribute. Example: ```python >>> from transformers import SwinConfig, SwinModel
406_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
.md
>>> # Initializing a Swin microsoft/swin-tiny-patch4-window7-224 style configuration >>> configuration = SwinConfig() >>> # Initializing a model (with random weights) from the microsoft/swin-tiny-patch4-window7-224 style configuration >>> model = SwinModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` <frameworkcontent> <pt>
406_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinmodel
.md
The bare Swin Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SwinConfig`]): Model configuration class with all the parameters of the model.
406_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinmodel
.md
behavior. Parameters: config ([`SwinConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. add_pooling_layer (`bool`, *optional*, defaults to `True`): Whether or not to apply a pooling layer. use_mask_token (`bool`, *optional*, defaults to `False`):
406_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinmodel
.md
Whether or not to apply a pooling layer. use_mask_token (`bool`, *optional*, defaults to `False`): Whether or not to create and apply mask tokens in the embedding layer. Methods: forward
406_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinformaskedimagemodeling
.md
Swin Model with a decoder on top for masked image modeling, as proposed in [SimMIM](https://arxiv.org/abs/2111.09886). <Tip> Note that we provide a script to pre-train this model on custom data in our [examples directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
406_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinformaskedimagemodeling
.md
</Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SwinConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
406_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinformaskedimagemodeling
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
406_6_2