Dataset columns:

| column | type | stats |
|---|---|---|
| source | stringclasses | 470 values |
| url | stringlengths | 49–167 |
| file_type | stringclasses | 1 value |
| chunk | stringlengths | 1–512 |
| chunk_id | stringlengths | 5–9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2model
.md
The bare MobileNetV2 model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileNetV2Config`]): Model configuration class with all the parameters of the model.
412_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2model
.md
behavior. Parameters: config ([`MobileNetV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
412_7_1
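A minimal feature-extraction sketch for the bare model, assuming the `google/mobilenet_v2_1.0_224` checkpoint and a COCO sample image (both are illustrative choices, not requirements):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, MobileNetV2Model

>>> # example image and checkpoint; substitute your own
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
>>> model = MobileNetV2Model.from_pretrained("google/mobilenet_v2_1.0_224")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # raw hidden-states of the final convolutional stage, no task head applied
>>> print(outputs.last_hidden_state.shape)
```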
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2forimageclassification
.md
MobileNetV2 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileNetV2Config`]): Model configuration class with all the parameters of the model.
412_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2forimageclassification
.md
behavior. Parameters: config ([`MobileNetV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
412_8_1
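A short classification sketch under the same assumptions (the `google/mobilenet_v2_1.0_224` ImageNet checkpoint and an example COCO image):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, MobileNetV2ForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
>>> model = MobileNetV2ForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # index of the highest-scoring ImageNet class
>>> predicted_class_idx = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_class_idx])
```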
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2forsemanticsegmentation
.md
MobileNetV2 model with a semantic segmentation head on top, e.g. for Pascal VOC. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileNetV2Config`]): Model configuration class with all the parameters of the model.
412_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2forsemanticsegmentation
.md
behavior. Parameters: config ([`MobileNetV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
412_9_1
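A semantic-segmentation sketch, assuming the `google/deeplabv3_mobilenet_v2_1.0_513` Pascal VOC checkpoint and an example image:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, MobileNetV2ForSemanticSegmentation

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
>>> model = MobileNetV2ForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch_size, num_labels, height, width) at reduced resolution

>>> # per-pixel class prediction
>>> segmentation = logits.argmax(dim=1)
>>> print(segmentation.shape)
```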
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
413_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. :warning: Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
413_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
The MusicGen Melody model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.
413_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
MusicGen Melody is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or *audio codes*, conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio
413_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio waveform.
413_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g. hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass. The abstract from the paper is the following:
413_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
*We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while
413_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the
413_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.*
413_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#overview
.md
This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/audiocraft). The pre-trained checkpoints can be found on the [Hugging Face Hub](https://huggingface.co/models?sort=downloads&search=facebook%2Fmusicgen).
413_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#difference-with-musicgenhttpshuggingfacecodocstransformersmainenmodeldocmusicgen
.md
There are two key differences with MusicGen: 1. The audio prompt is used here as a conditional signal for the generated audio sample, whereas it's used for audio continuation in [MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen). 2. Conditional text and audio signals are concatenated to the decoder's hidden states instead of being used as a cross-attention signal, as in MusicGen.
413_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#generation
.md
MusicGen Melody is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly better results than greedy, thus we encourage sampling mode to be used where possible. Sampling is enabled by default, and can be explicitly specified by setting `do_sample=True` in the call to [`MusicgenMelodyForConditionalGeneration.generate`], or by overriding the model's generation config (see below).
413_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#generation
.md
Transformers supports both mono (1-channel) and stereo (2-channel) variants of MusicGen Melody. The mono channel versions generate a single set of codebooks. The stereo versions generate 2 sets of codebooks, 1 for each channel (left/right), and each set of codebooks is decoded independently through the audio compression model. The audio streams for each channel are combined to give the final stereo output.
413_3_1
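As a sketch, a stereo checkpoint can be loaded and used exactly like the mono one; the checkpoint name `facebook/musicgen-stereo-melody` is assumed here, so verify it against the Hub before relying on it:

```python
>>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

>>> # assumed stereo checkpoint name; verify on the Hugging Face Hub
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-stereo-melody")
>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-stereo-melody")

>>> # 1 for mono checkpoints, 2 for stereo checkpoints
>>> print(model.config.decoder.audio_channels)

>>> inputs = processor(text=["80s blues track with groovy saxophone"], padding=True, return_tensors="pt")
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
>>> # (batch_size, num_channels, sequence_length); num_channels == 2 for stereo
>>> print(audio_values.shape)
```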
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
The model can generate an audio sample conditioned on a text and an audio prompt through use of the [`MusicgenMelodyProcessor`] to pre-process the inputs. In the following examples, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command below: ``` pip install --upgrade pip pip install datasets[audio] ``` The audio file we are about to use is loaded as follows: ```python >>> from datasets import load_dataset
413_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
>>> dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True) >>> sample = next(iter(dataset))["audio"] ``` The audio prompt should ideally be free of the low-frequency signals usually produced by instruments such as drums and bass. The [Demucs](https://github.com/adefossez/demucs/tree/main) model can be used to separate vocals and other signals from the drums and bass components.
413_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
If you wish to use Demucs, you first need to follow the installation steps [here](https://github.com/adefossez/demucs/tree/main?tab=readme-ov-file#for-musicians) before using the following snippet: ```python from demucs import pretrained from demucs.apply import apply_model from demucs.audio import convert_audio import torch
413_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
wav = torch.tensor(sample["array"]).to(torch.float32) demucs = pretrained.get_model('htdemucs') wav = convert_audio(wav[None], sample["sampling_rate"], demucs.samplerate, demucs.audio_channels) wav = apply_model(demucs, wav[None]) ``` You can then use the following snippet to generate music: ```python >>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration
413_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody") >>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
413_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
>>> inputs = processor( ... audio=wav, ... sampling_rate=demucs.samplerate, ... text=["80s blues track with groovy saxophone"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ``` You can also pass the audio signal directly without using Demucs, although the quality of the generation will probably be degraded: ```python
413_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
```python >>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration
413_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody") >>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
413_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
>>> inputs = processor( ... audio=sample["array"], ... sampling_rate=sample["sampling_rate"], ... text=["80s blues track with groovy saxophone"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ```
413_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ``` The audio outputs are a three-dimensional Torch tensor of shape `(batch_size, num_channels, sequence_length)`. To listen to the generated audio samples, you can either play them in an ipynb notebook: ```python from IPython.display import Audio
413_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#audio-conditional-generation
.md
sampling_rate = model.config.audio_encoder.sampling_rate Audio(audio_values[0].numpy(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. `soundfile`: ```python >>> import soundfile as sf >>> sampling_rate = model.config.audio_encoder.sampling_rate >>> sf.write("musicgen_out.wav", audio_values[0].T.numpy(), sampling_rate) ```
413_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#text-only-conditional-generation
.md
The same [`MusicgenMelodyProcessor`] can be used to pre-process a text-only prompt. ```python >>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration >>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody") >>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
413_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#text-only-conditional-generation
.md
>>> inputs = processor( ... text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ```
413_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#text-only-conditional-generation
.md
The `guidance_scale` is used in classifier-free guidance (CFG), setting the weighting between the conditional logits (which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or 'null' prompt). A higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer audio quality. CFG is enabled by setting `guidance_scale > 1`. For best results, use `guidance_scale=3`
413_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#text-only-conditional-generation
.md
the expense of poorer audio quality. CFG is enabled by setting `guidance_scale > 1`. For best results, use `guidance_scale=3` (default).
413_5_3
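Continuing from the text-only example above (the `model` and `inputs` variables are reused), a small sketch of how the guidance scale changes the call; the values are illustrative:

```python
>>> # classifier-free guidance disabled (guidance_scale <= 1): conditional and unconditional logits are not mixed
>>> audio_unguided = model.generate(**inputs, do_sample=True, guidance_scale=1, max_new_tokens=256)

>>> # default setting: stronger adherence to the text prompt, possibly at some cost in audio quality
>>> audio_guided = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```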
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#text-only-conditional-generation
.md
You can also generate in batch: ```python >>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration >>> from datasets import load_dataset
413_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#text-only-conditional-generation
.md
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody") >>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody") >>> # take the first quarter of the audio sample >>> sample_1 = sample["array"][: len(sample["array"]) // 4] >>> # take the first half of the audio sample >>> sample_2 = sample["array"][: len(sample["array"]) // 2]
413_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#text-only-conditional-generation
.md
>>> # take the first half of the audio sample >>> sample_2 = sample["array"][: len(sample["array"]) // 2] >>> inputs = processor( ... audio=[sample_1, sample_2], ... sampling_rate=sample["sampling_rate"], ... text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ```
413_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#unconditional-generation
.md
The inputs for unconditional (or 'null') generation can be obtained through the method [`MusicgenMelodyProcessor.get_unconditional_inputs`]: ```python >>> from transformers import MusicgenMelodyForConditionalGeneration, MusicgenMelodyProcessor >>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody") >>> unconditional_inputs = MusicgenMelodyProcessor.from_pretrained("facebook/musicgen-melody").get_unconditional_inputs(num_samples=1)
413_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#unconditional-generation
.md
>>> audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256) ```
413_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#generation-configuration
.md
The default parameters that control the generation process, such as sampling, guidance scale and number of generated tokens, can be found in the model's generation config, and updated as desired: ```python >>> from transformers import MusicgenMelodyForConditionalGeneration >>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody") >>> # inspect the default generation config >>> model.generation_config
413_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#generation-configuration
.md
>>> # inspect the default generation config >>> model.generation_config >>> # increase the guidance scale to 4.0 >>> model.generation_config.guidance_scale = 4.0 >>> # decrease the max length to 256 tokens >>> model.generation_config.max_length = 256 ``` Note that any arguments passed to the generate method will **supersede** those in the generation config, so setting `do_sample=False` in the call to generate will supersede the setting of `model.generation_config.do_sample` in the generation config.
413_7_1
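A minimal sketch of this override behaviour, reusing the `model` loaded above; the prompt is illustrative:

```python
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
>>> inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")

>>> # do_sample=False overrides do_sample=True from the generation config for this call only (greedy decoding)
>>> audio_values = model.generate(**inputs, do_sample=False, max_new_tokens=256)
```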
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#model-structure
.md
The MusicGen model can be decomposed into three distinct stages: 1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5. 2. MusicGen Melody decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations. 3. Audio decoder: used to recover the audio waveform from the audio tokens predicted by the decoder.
413_8_0
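The three stages can be inspected directly on the composite model; a quick sketch (the sub-module attribute names follow the parameters documented for [`MusicgenMelodyForConditionalGeneration`]):

```python
>>> from transformers import MusicgenMelodyForConditionalGeneration

>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

>>> print(type(model.text_encoder).__name__)   # 1. frozen text encoder (T5 for the released checkpoints)
>>> print(type(model.decoder).__name__)        # 2. MusicGen Melody decoder LM predicting audio codes
>>> print(type(model.audio_encoder).__name__)  # 3. audio compression model used to recover the waveform
```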
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#model-structure
.md
Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [`MusicgenMelodyForCausalLM`], or as a composite model that includes the text encoder and audio encoder, corresponding to the class [`MusicgenMelodyForConditionalGeneration`]. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first specifying the correct config, or be accessed through the `.decoder` attribute of the composite model: ```python
413_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#model-structure
.md
```python >>> from transformers import AutoConfig, MusicgenMelodyForCausalLM, MusicgenMelodyForConditionalGeneration
413_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#model-structure
.md
>>> # Option 1: get decoder config and pass to `.from_pretrained` >>> decoder_config = AutoConfig.from_pretrained("facebook/musicgen-melody").decoder >>> decoder = MusicgenMelodyForCausalLM.from_pretrained("facebook/musicgen-melody", **decoder_config.to_dict())
413_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#model-structure
.md
>>> # Option 2: load the entire composite model, but only return the decoder >>> decoder = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody").decoder ```
413_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#model-structure
.md
>>> decoder = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody").decoder ``` Since the text encoder and audio encoder models are frozen during training, the MusicGen decoder [`MusicgenMelodyForCausalLM`] can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can be combined with the frozen text encoder and audio encoder to recover the composite [`MusicgenMelodyForConditionalGeneration`] model.
413_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#checkpoint-conversion
.md
- After downloading the original checkpoints from [here](https://github.com/facebookresearch/audiocraft/blob/main/docs/MUSICGEN.md#importing--exporting-models), you can convert them using the **conversion script** available at `src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py` with the following command: ```bash python src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py \ --checkpoint="facebook/musicgen-melody" --pytorch_dump_folder /output/path
413_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#checkpoint-conversion
.md
--checkpoint="facebook/musicgen-melody" --pytorch_dump_folder /output/path ``` Tips: * MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model. * Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable `do_sample` in the call to [`MusicgenMelodyForConditionalGeneration.generate`]
413_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
This is the configuration class to store the configuration of a [`MusicgenMelodyDecoder`]. It is used to instantiate a Musicgen Melody decoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Musicgen Melody [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) architecture.
413_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
[facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 2048): Vocabulary size of the MusicgenMelodyDecoder model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`MusicgenMelodyDecoder`].
413_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
represented by the `inputs_ids` passed when calling [`MusicgenMelodyDecoder`]. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Typically, set this to something large just in case (e.g., 512 or 1024 or 2048). num_hidden_layers (`int`, *optional*, defaults to 24): Number of decoder layers. ffn_dim (`int`, *optional*, defaults to 4096):
413_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
num_hidden_layers (`int`, *optional*, defaults to 24): Number of decoder layers. ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer block. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer block. layerdrop (`float`, *optional*, defaults to 0.0):
413_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
Number of attention heads for each attention layer in the Transformer block. layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. use_cache (`bool`, *optional*, defaults to `True`): Whether the model should return the last key/values attentions (not used by all models). activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
413_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the decoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler.
413_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. initializer_factor (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
413_10_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. scale_embedding (`bool`, *optional*, defaults to `False`): Scale embeddings by dividing by sqrt(hidden_size). num_codebooks (`int`, *optional*, defaults to 4): The number of parallel codebooks forwarded to the model. audio_channels (`int`, *optional*, defaults to 1): Number of audio channels used by the model (either mono or stereo). Stereo models generate a separate
413_10_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
Number of audio channels used by the model (either mono or stereo). Stereo models generate a separate audio stream for the left/right output channels. Mono models generate a single audio stream output. pad_token_id (`int`, *optional*, defaults to 2048): The id of the *padding* token. bos_token_id (`int`, *optional*, defaults to 2048): The id of the *beginning-of-sequence* token. eos_token_id (`int`, *optional*): The id of the *end-of-sequence* token.
413_10_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodydecoderconfig
.md
eos_token_id (`int`, *optional*): The id of the *end-of-sequence* token. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie word embeddings with the text encoder.
413_10_9
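A short sketch instantiating a (deliberately small) decoder configuration from the arguments listed above; the values are illustrative, not recommended settings:

```python
>>> from transformers import MusicgenMelodyDecoderConfig

>>> decoder_config = MusicgenMelodyDecoderConfig(
...     hidden_size=512,
...     num_hidden_layers=12,
...     num_attention_heads=8,
...     ffn_dim=2048,
...     num_codebooks=4,
... )

>>> print(decoder_config.vocab_size)  # 2048 by default
```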
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyprocessor
.md
Constructs a MusicGen Melody processor which wraps a MusicGen Melody feature extractor - for raw audio waveform processing - and a T5 tokenizer into a single processor class. [`MusicgenMelodyProcessor`] offers all the functionalities of [`MusicgenMelodyFeatureExtractor`] and [`T5Tokenizer`]. See [`~MusicgenMelodyProcessor.__call__`] and [`~MusicgenMelodyProcessor.decode`] for more information. Args: feature_extractor (`MusicgenMelodyFeatureExtractor`):
413_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyprocessor
.md
Args: feature_extractor (`MusicgenMelodyFeatureExtractor`): An instance of [`MusicgenMelodyFeatureExtractor`]. The feature extractor is a required input. tokenizer (`T5Tokenizer`): An instance of [`T5Tokenizer`]. The tokenizer is a required input. Methods: get_unconditional_inputs
413_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyfeatureextractor
.md
Constructs a MusicgenMelody feature extractor. This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. This class extracts chroma features from audio processed by [Demucs](https://github.com/adefossez/demucs/tree/main) or directly from raw audio waveform. Args: feature_size (`int`, *optional*, defaults to 12):
413_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyfeatureextractor
.md
directly from raw audio waveform. Args: feature_size (`int`, *optional*, defaults to 12): The feature dimension of the extracted features. sampling_rate (`int`, *optional*, defaults to 32000): The sampling rate at which the audio files should be digitized, expressed in hertz (Hz). hop_length (`int`, *optional*, defaults to 4096): Length of the overlapping windows for the STFT used to obtain the Mel Frequency coefficients. chunk_length (`int`, *optional*, defaults to 30):
413_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyfeatureextractor
.md
chunk_length (`int`, *optional*, defaults to 30): The maximum number of chunks of `sampling_rate` samples used to trim and pad longer or shorter audio sequences. n_fft (`int`, *optional*, defaults to 16384): Size of the Fourier transform. num_chroma (`int`, *optional*, defaults to 12): Number of chroma bins to use. padding_value (`float`, *optional*, defaults to 0.0): Padding value used to pad the audio. return_attention_mask (`bool`, *optional*, defaults to `False`):
413_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyfeatureextractor
.md
Padding value used to pad the audio. return_attention_mask (`bool`, *optional*, defaults to `False`): Whether to return the attention mask. Can be overwritten when calling the feature extractor. [What are attention masks?](../glossary#attention-mask) <Tip> For Whisper models, `attention_mask` should always be passed for batched inference, to avoid subtle bugs. </Tip> stem_indices (`List[int]`, *optional*, defaults to `[3, 2]`): Stem channels to extract if demucs outputs are passed.
413_12_3
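A minimal sketch of extracting chroma features from a raw waveform with the default settings; the one-second random signal stands in for a real recording, and the exact output shape depends on padding/chunking:

```python
>>> import numpy as np
>>> from transformers import MusicgenMelodyFeatureExtractor

>>> feature_extractor = MusicgenMelodyFeatureExtractor()  # defaults: 12 chroma bins, 32 kHz

>>> # one second of stand-in audio at the expected sampling rate
>>> waveform = np.random.randn(32000).astype(np.float32)

>>> features = feature_extractor(waveform, sampling_rate=32000, return_tensors="pt")
>>> print(features["input_features"].shape)  # chroma feature tensor
```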
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyconfig
.md
This is the configuration class to store the configuration of a [`MusicgenMelodyModel`]. It is used to instantiate a Musicgen Melody model according to the specified arguments, defining the text encoder, audio encoder and Musicgen Melody decoder configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the Musicgen Melody [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) architecture.
413_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyconfig
.md
[facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_chroma (`int`, *optional*, defaults to 12): Number of chroma bins to use. chroma_length (`int`, *optional*, defaults to 235):
413_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyconfig
.md
chroma_length (`int`, *optional*, defaults to 235): Maximum chroma duration if audio is used to condition the model. Corresponds to the maximum duration used during training. kwargs (*optional*): Dictionary of keyword arguments. Notably: - **text_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the text encoder config. - **audio_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the audio encoder config.
413_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyconfig
.md
defines the audio encoder config. - **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the decoder config. Example: ```python >>> from transformers import ( ... MusicgenMelodyConfig, ... MusicgenMelodyDecoderConfig, ... T5Config, ... EncodecConfig, ... MusicgenMelodyForConditionalGeneration, ... )
413_13_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyconfig
.md
>>> # Initializing text encoder, audio encoder, and decoder model configurations >>> text_encoder_config = T5Config() >>> audio_encoder_config = EncodecConfig() >>> decoder_config = MusicgenMelodyDecoderConfig() >>> configuration = MusicgenMelodyConfig.from_sub_models_config( ... text_encoder_config, audio_encoder_config, decoder_config ... )
413_13_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyconfig
.md
>>> # Initializing a MusicgenMelodyForConditionalGeneration (with random weights) from the facebook/musicgen-melody style configuration >>> model = MusicgenMelodyForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> config_text_encoder = model.config.text_encoder >>> config_audio_encoder = model.config.audio_encoder >>> config_decoder = model.config.decoder
413_13_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyconfig
.md
>>> # Saving the model, including its configuration >>> model.save_pretrained("musicgen_melody-model") >>> # loading model and config from pretrained folder >>> musicgen_melody_config = MusicgenMelodyConfig.from_pretrained("musicgen_melody-model") >>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("musicgen_melody-model", config=musicgen_melody_config) ```
413_13_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodymodel
.md
The bare MusicgenMelody decoder model outputting raw hidden-states without any specific head on top. The Musicgen Melody model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is a decoder-only transformer trained on the task of conditional music generation.
413_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodymodel
.md
decoder-only transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
413_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodymodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MusicgenMelodyConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
413_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodymodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
413_14_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforcausallm
.md
The Musicgen Melody decoder model with a language modelling head on top. The Musicgen Melody model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is a decoder-only transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
413_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforcausallm
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
413_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforcausallm
.md
and behavior. Parameters: config ([`MusicgenMelodyConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
413_15_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforconditionalgeneration
.md
The composite Musicgen Melody model with text and audio conditional models, a MusicgenMelody decoder and an audio encoder, for music generation tasks with one or both of text and audio prompts. The Musicgen Melody model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is a
413_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforconditionalgeneration
.md
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is a decoder-only transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
413_16_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforconditionalgeneration
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MusicgenMelodyConfig`]): Model configuration class with all the parameters of the model.
413_16_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforconditionalgeneration
.md
and behavior. Parameters: config ([`MusicgenMelodyConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. text_encoder (`Optional[PreTrainedModel]`, *optional*): Text encoder. audio_encoder (`Optional[PreTrainedModel]`, *optional*): Audio code decoder.
413_16_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen_melody.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen_melody/#musicgenmelodyforconditionalgeneration
.md
audio_encoder (`Optional[PreTrainedModel]`, *optional*): Audio code decoder. decoder (`Optional[MusicgenMelodyForCausalLM]`, *optional*): MusicGen Melody decoder used to generate audio codes. Methods: forward
413_16_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
414_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
414_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#overview
.md
The GPT-J model was released in the [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like causal language model trained on [the Pile](https://pile.eleuther.ai/) dataset. This model was contributed by [Stella Biderman](https://huggingface.co/stellaathena).
414_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-tips
.md
- To load [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) in float32 one would need at least 2x the model size in RAM: 1x for the initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB of RAM just to load the model. To reduce the RAM usage there are a few options. The `torch_dtype` argument can be used to initialize the model in half-precision on a CUDA device only. There is also an fp16 branch which stores the fp16 weights,
414_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-tips
.md
which could be used to further minimize the RAM usage: ```python >>> from transformers import GPTJForCausalLM >>> import torch
414_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-tips
.md
>>> device = "cuda" >>> model = GPTJForCausalLM.from_pretrained( ... "EleutherAI/gpt-j-6B", ... revision="float16", ... torch_dtype=torch.float16, ... ).to(device) ``` - The model should fit on 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. Adam optimizer for example makes four copies of the model: model, gradients, average and squared average of the gradients.
414_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-tips
.md
optimizer, for example, makes four copies of the model: model, gradients, average and squared average of the gradients. So it would need at least 4x the model size in GPU memory, even with mixed precision, as gradient updates are in fp32. This is not including the activations and data batches, which would again require some more GPU RAM. So one should explore solutions such as DeepSpeed to train/fine-tune the model. Another option is to use the original codebase to
414_2_3
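A back-of-the-envelope sketch of that estimate (the parameter count is approximate, and activations/batches are excluded):

```python
>>> num_params = 6.05e9  # approximate GPT-J-6B parameter count

>>> bytes_per_param = 4  # fp32
>>> weights = num_params * bytes_per_param
>>> gradients = num_params * bytes_per_param
>>> adam_moments = 2 * num_params * bytes_per_param  # running average and squared average of the gradients

>>> total_gib = (weights + gradients + adam_moments) / 1024**3
>>> print(f"~{total_gib:.0f} GiB before activations and data batches")  # roughly 90 GiB
```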
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-tips
.md
solutions such as DeepSpeed to train/fine-tune the model. Another option is to use the original codebase to train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for that can be found [here](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto_finetune.md). - Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra
414_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-tips
.md
- Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra tokens are added for the sake of efficiency on TPUs. To avoid the mismatch between the embedding matrix size and the vocab size, the tokenizer for [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) contains 143 extra tokens `<|extratoken_1|>... <|extratoken_143|>`, so the `vocab_size` of the tokenizer also becomes 50400.
414_2_5
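A quick sketch to verify that the embedding matrix and the tokenizer line up (the printed values are expectations, not guarantees):

```python
>>> from transformers import AutoConfig, AutoTokenizer

>>> config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

>>> print(config.vocab_size)  # 50400: size of the embedding matrix
>>> print(len(tokenizer))     # 50400 once the <|extratoken_k|> additions are counted
>>> # since the two match, no call to resize_token_embeddings is needed before fine-tuning
```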
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-examples
.md
The [`~generation.GenerationMixin.generate`] method can be used to generate text using the GPT-J model. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
414_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-examples
.md
>>> prompt = ( ... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " ... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " ... "researchers was the fact that the unicorns spoke perfect English." ... ) >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
414_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-examples
.md
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ``` ...or in float16 precision: ```python >>> from transformers import GPTJForCausalLM, AutoTokenizer >>> import torch
414_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-examples
.md
>>> device = "cuda" >>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to(device) >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") >>> prompt = ( ... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " ... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " ... "researchers was the fact that the unicorns spoke perfect English." ... )
414_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#usage-examples
.md
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ```
414_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-generation"/> - Description of [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B).
414_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md
https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources
.md
<PipelineTag pipeline="text-generation"/> - Description of [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B). - A blog on how to [Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker](https://huggingface.co/blog/gptj-sagemaker). - A blog on how to [Accelerate GPT-J inference with DeepSpeed-Inference on GPUs](https://www.philschmid.de/gptj-deepspeed-inference).
414_4_1