source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#usage-tips
.md
* To use Pop2Piano, you will need to install the 🤗 Transformers library, as well as the following third-party modules: ```bash pip install pretty-midi==0.2.9 essentia==2.1b6.dev1034 librosa scipy ``` Please note that you may need to restart your runtime after installation. * Pop2Piano is an encoder-decoder model, like T5. * Pop2Piano can be used to generate MIDI audio files for a given audio sequence.
342_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#usage-tips
.md
* Pop2Piano can be used to generate MIDI audio files for a given audio sequence. * Choosing different composers in `Pop2PianoForConditionalGeneration.generate()` can lead to a variety of different results. * Setting the sampling rate to 44.1 kHz when loading the audio file can give good performance. * Though Pop2Piano was mainly trained on Korean pop music, it also performs quite well on other Western pop and hip hop songs.
342_3_1
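As a minimal sketch of the composer tip above (assuming the `sweetcocoa/pop2piano` checkpoint and composer ids named `composer1` through `composer21`, following that checkpoint's naming; the audio path is a placeholder):

```python
>>> import librosa
>>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor

>>> # Load at 44.1 kHz, as recommended above
>>> audio, sr = librosa.load("<your_audio_file_here>", sr=44100)

>>> model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
>>> processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")

>>> inputs = processor(audio=audio, sampling_rate=sr, return_tensors="pt")

>>> # Different composer ids give different piano arrangements of the same song
>>> for composer in ["composer1", "composer5", "composer10"]:
...     model_output = model.generate(input_features=inputs["input_features"], composer=composer)
...     midi = processor.batch_decode(token_ids=model_output, feature_extractor_output=inputs)["pretty_midi_objects"][0]
...     midi.write(f"./Outputs/{composer}.mid")
```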
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
- Example using HuggingFace Dataset: ```python >>> from datasets import load_dataset >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor >>> model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") >>> processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano") >>> ds = load_dataset("sweetcocoa/pop2piano_ci", split="test")
342_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> inputs = processor( ... audio=ds["audio"][0]["array"], sampling_rate=ds["audio"][0]["sampling_rate"], return_tensors="pt" ... ) >>> model_output = model.generate(input_features=inputs["input_features"], composer="composer1") >>> tokenizer_output = processor.batch_decode( ... token_ids=model_output, feature_extractor_output=inputs ... )["pretty_midi_objects"][0] >>> tokenizer_output.write("./Outputs/midi_output.mid") ``` - Example using your own audio file: ```python >>> import librosa
342_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
``` - Example using your own audio file: ```python >>> import librosa >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
342_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> audio, sr = librosa.load("<your_audio_file_here>", sr=44100) # feel free to change the sr to a suitable value. >>> model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") >>> processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")
342_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> inputs = processor(audio=audio, sampling_rate=sr, return_tensors="pt") >>> model_output = model.generate(input_features=inputs["input_features"], composer="composer1") >>> tokenizer_output = processor.batch_decode( ... token_ids=model_output, feature_extractor_output=inputs ... )["pretty_midi_objects"][0] >>> tokenizer_output.write("./Outputs/midi_output.mid") ``` - Example of processing multiple audio files in batch: ```python >>> import librosa
342_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
``` - Example of processing multiple audio files in batch: ```python >>> import librosa >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
342_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> # feel free to change the sr to a suitable value. >>> audio1, sr1 = librosa.load("<your_first_audio_file_here>", sr=44100) >>> audio2, sr2 = librosa.load("<your_second_audio_file_here>", sr=44100) >>> model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") >>> processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")
342_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> inputs = processor(audio=[audio1, audio2], sampling_rate=[sr1, sr2], return_attention_mask=True, return_tensors="pt") >>> # Since we are now generating in a batch (2 audios), we must pass the attention_mask >>> model_output = model.generate( ... input_features=inputs["input_features"], ... attention_mask=inputs["attention_mask"], ... composer="composer1", ... ) >>> tokenizer_output = processor.batch_decode( ... token_ids=model_output, feature_extractor_output=inputs
342_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
... ) >>> tokenizer_output = processor.batch_decode( ... token_ids=model_output, feature_extractor_output=inputs ... )["pretty_midi_objects"]
342_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> # Since we now have 2 generated MIDI files >>> tokenizer_output[0].write("./Outputs/midi_output1.mid") >>> tokenizer_output[1].write("./Outputs/midi_output2.mid") ``` - Example of processing multiple audio files in batch (Using `Pop2PianoFeatureExtractor` and `Pop2PianoTokenizer`): ```python >>> import librosa >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoFeatureExtractor, Pop2PianoTokenizer
342_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> # feel free to change the sr to a suitable value. >>> audio1, sr1 = librosa.load("<your_first_audio_file_here>", sr=44100) >>> audio2, sr2 = librosa.load("<your_second_audio_file_here>", sr=44100) >>> model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") >>> feature_extractor = Pop2PianoFeatureExtractor.from_pretrained("sweetcocoa/pop2piano") >>> tokenizer = Pop2PianoTokenizer.from_pretrained("sweetcocoa/pop2piano")
342_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> inputs = feature_extractor( ... audio=[audio1, audio2], ... sampling_rate=[sr1, sr2], ... return_attention_mask=True, ... return_tensors="pt", ... ) >>> # Since we are now generating in a batch (2 audios), we must pass the attention_mask >>> model_output = model.generate( ... input_features=inputs["input_features"], ... attention_mask=inputs["attention_mask"], ... composer="composer1", ... ) >>> tokenizer_output = tokenizer.batch_decode(
342_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
... composer="composer1", ... ) >>> tokenizer_output = tokenizer.batch_decode( ... token_ids=model_output, feature_extractor_output=inputs ... )["pretty_midi_objects"]
342_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#examples
.md
>>> # Since we now have 2 generated MIDI files >>> tokenizer_output[0].write("./Outputs/midi_output1.mid") >>> tokenizer_output[1].write("./Outputs/midi_output2.mid") ```
342_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
This is the configuration class to store the configuration of a [`Pop2PianoForConditionalGeneration`]. It is used to instantiate a Pop2PianoForConditionalGeneration model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Pop2Piano [sweetcocoa/pop2piano](https://huggingface.co/sweetcocoa/pop2piano) architecture.
342_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
Pop2Piano [sweetcocoa/pop2piano](https://huggingface.co/sweetcocoa/pop2piano) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Arguments: vocab_size (`int`, *optional*, defaults to 2400): Vocabulary size of the `Pop2PianoForConditionalGeneration` model. Defines the number of different tokens
342_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
Vocabulary size of the `Pop2PianoForConditionalGeneration` model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`Pop2PianoForConditionalGeneration`]. composer_vocab_size (`int`, *optional*, defaults to 21): Denotes the number of composers. d_model (`int`, *optional*, defaults to 512): Size of the encoder layers and the pooler layer. d_kv (`int`, *optional*, defaults to 64):
342_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
Size of the encoder layers and the pooler layer. d_kv (`int`, *optional*, defaults to 64): Size of the key, query, value projections per attention head. The `inner_dim` of the projection layer will be defined as `num_heads * d_kv`. d_ff (`int`, *optional*, defaults to 2048): Size of the intermediate feed forward layer in each `Pop2PianoBlock`. num_layers (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer encoder. num_decoder_layers (`int`, *optional*):
342_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
Number of hidden layers in the Transformer encoder. num_decoder_layers (`int`, *optional*): Number of hidden layers in the Transformer decoder. Will use the same value as `num_layers` if not set. num_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder. relative_attention_num_buckets (`int`, *optional*, defaults to 32): The number of buckets to use for each attention layer.
342_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
relative_attention_num_buckets (`int`, *optional*, defaults to 32): The number of buckets to use for each attention layer. relative_attention_max_distance (`int`, *optional*, defaults to 128): The maximum distance of the longer sequences for the bucket separation. dropout_rate (`float`, *optional*, defaults to 0.1): The ratio for all dropout layers. layer_norm_epsilon (`float`, *optional*, defaults to 1e-6): The epsilon used by the layer normalization layers.
342_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
layer_norm_epsilon (`float`, *optional*, defaults to 1e-6): The epsilon used by the layer normalization layers. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization testing). feed_forward_proj (`string`, *optional*, defaults to `"gated-gelu"`): Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`. use_cache (`bool`, *optional*, defaults to `True`):
342_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). dense_act_fn (`string`, *optional*, defaults to `"relu"`): Type of activation function to be used in `Pop2PianoDenseActDense` and in `Pop2PianoDenseGatedActDense`.
342_5_7
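As a short, hedged sketch of how the arguments above fit together (the non-default values below are purely illustrative, not those of the released checkpoint):

```python
>>> from transformers import Pop2PianoConfig, Pop2PianoForConditionalGeneration

>>> # Defaults give a configuration similar to sweetcocoa/pop2piano
>>> configuration = Pop2PianoConfig()

>>> # The documented arguments can be overridden; these values are illustrative only
>>> small_configuration = Pop2PianoConfig(d_model=256, d_ff=1024, num_layers=4, num_heads=4)

>>> # Initializing a model (with random weights) from the configuration
>>> model = Pop2PianoForConditionalGeneration(small_configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```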
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianofeatureextractor
.md
No docstring available for Pop2PianoFeatureExtractor Methods: __call__
342_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoforconditionalgeneration
.md
Pop2Piano Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
342_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoforconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Pop2PianoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
342_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoforconditionalgeneration
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate
342_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianotokenizer
.md
No docstring available for Pop2PianoTokenizer Methods: __call__
342_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2pianoprocessor
.md
No docstring available for Pop2PianoProcessor Methods: __call__
342_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
343_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
343_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#overview
.md
The FalconMamba model was proposed by TII UAE (Technology Innovation Institute) in their release. The abstract from the paper is the following:
343_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#overview
.md
*We present FalconMamba, a new base large language model based on the novel Mamba architecture. FalconMamba is trained on 5.8 trillion tokens with carefully selected data mixtures. As a pure Mamba-based model, FalconMamba surpasses leading open-weight models based on Transformers, such as Mistral 7B, Llama3 8B, and Falcon2 11B. It is on par with Gemma 7B and outperforms models with different architecture designs, such as RecurrentGemma 9B. Currently, FalconMamba is the best-performing Mamba model in the
343_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#overview
.md
different architecture designs, such as RecurrentGemma 9B. Currently, FalconMamba is the best-performing Mamba model in the literature at this scale, surpassing both existing Mamba and hybrid Mamba-Transformer models.
343_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#overview
.md
Due to its architecture, FalconMamba is significantly faster at inference and requires substantially less memory for long sequence generation. Despite recent studies suggesting that hybrid Mamba-Transformer models outperform pure architecture designs, we argue and demonstrate that the pure Mamba design can achieve similar, even superior results compared to the hybrid design. We make the weights of our implementation of FalconMamba publicly available under a permissive license.* Tips:
343_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#overview
.md
Tips: - FalconMamba is mostly based on the Mamba architecture, so the same [tips and best practices](./mamba) are relevant here. The model has been trained on approximately 6T tokens consisting of a mixture of many data sources such as RefinedWeb, Cosmopedia and Math data. For more details about the training procedure and the architecture, have a look at [the technical paper of FalconMamba]() (coming soon).
343_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#usage
.md
Below we demonstrate how to use the model: ```python from transformers import FalconMambaForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b") model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b") input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
343_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#usage
.md
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"] out = model.generate(input_ids, max_new_tokens=10) print(tokenizer.batch_decode(out)) ``` The architecture is also compatible with `torch.compile` for faster generation: ```python from transformers import FalconMambaForCausalLM, AutoTokenizer import torch
343_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#usage
.md
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b") model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b", torch_dtype=torch.bfloat16).to(0) model = torch.compile(model) input_ids = tokenizer("Hey how are you doing?", return_tensors= "pt")["input_ids"]
343_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#usage
.md
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"].to(0) out = model.generate(input_ids, max_new_tokens=10) print(tokenizer.batch_decode(out)) ``` If you have access to a GPU that is compatible with `bitsandbytes`, you can also quantize the model in 4-bit precision: ```python from transformers import FalconMambaForCausalLM, AutoTokenizer, BitsAndBytesConfig import torch
343_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#usage
.md
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b") quantization_config = BitsAndBytesConfig(load_in_4bit=True) model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b", quantization_config=quantization_config) input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
343_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#usage
.md
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"] out = model.generate(input_ids, max_new_tokens=10) print(tokenizer.batch_decode(out)) ``` You can also play with the instruction fine-tuned model: ```python from transformers import FalconMambaForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct") model = FalconMambaForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
343_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#usage
.md
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ```
343_2_6
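As an additional sketch, the checkpoint can also be driven through the high-level `pipeline` API (the `device_map="auto"` setting assumes `accelerate` is installed; the generation settings are illustrative):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tiiuae/falcon-mamba-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Short completion, mirroring the examples above
print(pipe("Hey how are you doing?", max_new_tokens=10)[0]["generated_text"])
```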
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
This is the configuration class to store the configuration of a [`FalconMambaModel`]. It is used to instantiate a FALCON_MAMBA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FALCON_MAMBA [tiiuae/falcon-mamba-7b](https://huggingface.co/tiiuae/falcon-mamba-7b) architecture.
343_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
[tiiuae/falcon-mamba-7b](https://huggingface.co/tiiuae/falcon-mamba-7b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50280): Vocabulary size of the FALCON_MAMBA model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`FalconMambaModel`].
343_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
`input_ids` passed when calling [`FalconMambaModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the embeddings and hidden states. state_size (`int`, *optional*, defaults to 16): Shape of the state space latents. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the model. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0):
343_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
The epsilon to use in the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 0): The id of the beginning of sentence token in the vocabulary. eos_token_id (`int`, *optional*, defaults to 0): The id of the end of sentence token in the vocabulary. expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
343_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size. conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel. use_bias (`bool`, *optional*, defaults to `False`): Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block use_conv_bias (`bool`, *optional*, defaults to `True`): Whether or not to use bias in the convolution layer of the mixer block. hidden_act (`str`, *optional*, defaults to `"silu"`):
343_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
Whether or not to use bias in the convolution layer of the mixer block. hidden_act (`str`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. initializer_range (`float`, *optional*, defaults to 0.1): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. residual_in_fp32 (`bool`, *optional*, defaults to `True`):
343_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
residual_in_fp32 (`bool`, *optional*, defaults to `True`): Whether or not residuals should be in `float32`. If set to `False`, residuals will keep the same `dtype` as the rest of the model. time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`): Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)` time_step_scale (`float`, *optional*, defaults to 1.0): Scale used to scale `dt_proj.bias`.
343_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
time_step_scale (`float`, *optional*, defaults to 1.0): Scale used to scale `dt_proj.bias`. time_step_min (`float`, *optional*, defaults to 0.001): Minimum `time_step` used to bound `dt_proj.bias`. time_step_max (`float`, *optional*, defaults to 0.1): Maximum `time_step` used to bound `dt_proj.bias`. time_step_init_scheme (`str`, *optional*, defaults to `"random"`): Init scheme used for `dt_proj.weight`. Should be one of `["random","uniform"]`
343_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
Init scheme used for `dt_proj.weight`. Should be one of `["random","uniform"]` time_step_floor (`float`, *optional*, defaults to 0.0001): Minimum clamping value of the `dt_proj.bias` layer initialization. rescale_prenorm_residual (`bool`, *optional*, defaults to `False`): Whether or not to rescale `out_proj` weights when initializing. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the cache should be used. use_mambapy (`bool`, *optional*, defaults to `False`):
343_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
Whether or not the cache should be used. use_mambapy (`bool`, *optional*, defaults to `False`): Determines the fallback strategy during training if the CUDA-based official implementation of FalconMamba is not available. If `True`, the falcon_mamba.py implementation is used. If `False`, the naive and slower implementation is used. Consider switching to the naive version if memory is limited. mixer_rms_eps (`float`, *optional*, defaults to 1e-06):
343_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
mixer_rms_eps (`float`, *optional*, defaults to 1e-06): The RMS norm epsilon value that is used in the Mixer RMS norm for B, C and dt states. Example: ```python >>> from transformers import FalconMambaConfig, FalconMambaModel
343_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambaconfig
.md
>>> # Initializing a FalconMamba configuration >>> configuration = FalconMambaConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = FalconMambaModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
343_3_11
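To make the mapping from the documented arguments to a configuration explicit, here is a hedged sketch with illustrative (non-default) values; they do not correspond to the tiiuae/falcon-mamba-7b checkpoint:

```python
>>> from transformers import FalconMambaConfig, FalconMambaModel

>>> # Illustrative values for a few of the arguments documented above
>>> configuration = FalconMambaConfig(
...     vocab_size=32000,
...     hidden_size=1024,
...     num_hidden_layers=24,
...     state_size=16,
...     expand=2,
...     conv_kernel=4,
... )

>>> # Initializing a model (with random weights) from this configuration
>>> model = FalconMambaModel(configuration)
>>> model.config.hidden_size
1024
```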
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambamodel
.md
The bare FALCONMAMBA Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
343_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambamodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FalconMambaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
343_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambamodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
343_4_2
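A minimal sketch of calling the bare model to obtain raw hidden states (same checkpoint and prompt as in the usage section above):

```python
import torch
from transformers import AutoTokenizer, FalconMambaModel

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b")
model = FalconMambaModel.from_pretrained("tiiuae/falcon-mamba-7b")

input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

with torch.no_grad():
    outputs = model(input_ids)

# Raw hidden states, without any head on top
print(outputs.last_hidden_state.shape)
```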
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambalmheadmodel
.md
The FALCONMAMBA Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
343_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambalmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FalconMambaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
343_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon_mamba.md
https://huggingface.co/docs/transformers/en/model_doc/falcon_mamba/#falconmambalmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
343_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
344_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
344_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#overview
.md
The ConvNeXt V2 model was proposed in [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of [ConvNeXT](convnext). The abstract from the paper is the following:
344_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#overview
.md
*Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However,
344_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#overview
.md
labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural
344_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#overview
.md
to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to
344_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#overview
.md
V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.*
344_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnextv2_architecture.png" alt="drawing" width="600"/> <small> ConvNeXt V2 architecture. Taken from the <a href="https://arxiv.org/abs/2301.00808">original paper</a>.</small> This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt-V2).
344_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2. <PipelineTag pipeline="image-classification"/> - [`ConvNextV2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
344_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#resources
.md
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
344_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
This is the configuration class to store the configuration of a [`ConvNextV2Model`]. It is used to instantiate an ConvNeXTV2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ConvNeXTV2 [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) architecture.
344_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
[facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_channels (`int`, *optional*, defaults to 3): The number of input channels. patch_size (`int`, *optional*, defaults to 4): Patch size to use in the patch embedding layer. num_stages (`int`, *optional*, defaults to 4):
344_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
Patch size to use in the patch embedding layer. num_stages (`int`, *optional*, defaults to 4): The number of stages in the model. hidden_sizes (`List[int]`, *optional*, defaults to `[96, 192, 384, 768]`): Dimensionality (hidden size) at each stage. depths (`List[int]`, *optional*, defaults to `[3, 3, 9, 3]`): Depth (number of blocks) for each stage. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
344_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
Depth (number of blocks) for each stage. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in each block. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12):
344_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. drop_path_rate (`float`, *optional*, defaults to 0.0): The drop rate for stochastic depth. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
344_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
344_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. Example: ```python >>> from transformers import ConvNextV2Config, ConvNextV2Model
344_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2config
.md
>>> # Initializing a ConvNeXTV2 convnextv2-tiny-1k-224 style configuration >>> configuration = ConvNextV2Config() >>> # Initializing a model (with random weights) from the convnextv2-tiny-1k-224 style configuration >>> model = ConvNextV2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
344_3_7
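Since `out_features` and `out_indices` only apply when the model is used as a backbone, here is a hedged sketch of that use case (stage names and sizes follow the defaults documented above; the random tensor stands in for real pixel values):

```python
>>> import torch
>>> from transformers import ConvNextV2Config, ConvNextV2Backbone

>>> # Request feature maps from two intermediate stages; everything else stays at the defaults
>>> config = ConvNextV2Config(out_features=["stage2", "stage4"])
>>> backbone = ConvNextV2Backbone(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)
>>> feature_maps = backbone(pixel_values).feature_maps

>>> # One feature map per requested stage; the last one comes from stage4
>>> list(feature_maps[-1].shape)
[1, 768, 7, 7]
```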
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2model
.md
The bare ConvNextV2 model outputting raw features without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ConvNextV2Config`]): Model configuration class with all the parameters of the model.
344_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2model
.md
behavior. Parameters: config ([`ConvNextV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
344_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2forimageclassification
.md
ConvNextV2 Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ConvNextV2Config`]): Model configuration class with all the parameters of the model.
344_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#convnextv2forimageclassification
.md
behavior. Parameters: config ([`ConvNextV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
344_5_1
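A minimal inference sketch for the image classification head, using the `facebook/convnextv2-tiny-1k-224` checkpoint (the COCO image URL is just a convenient placeholder for any RGB image):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
>>> model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")

>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Map the highest-scoring logit to its ImageNet-1k label
>>> print(model.config.id2label[logits.argmax(-1).item()])
```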
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#tfconvnextv2model
.md
No docstring available for TFConvNextV2Model Methods: call
344_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/convnextv2.md
https://huggingface.co/docs/transformers/en/model_doc/convnextv2/#tfconvnextv2forimageclassification
.md
No docstring available for TFConvNextV2ForImageClassification Methods: call
344_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
345_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/
.md
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. specific language governing permissions and limitations under the License. -->
345_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#overview
.md
The Donut model was proposed in [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding tasks such as document image classification, form understanding and visual question answering.
345_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#overview
.md
tasks such as document image classification, form understanding and visual question answering. The abstract from the paper is the following:
345_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#overview
.md
*Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using
345_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#overview
.md
Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training
345_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#overview
.md
As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and
345_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#overview
.md
In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains.*
345_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg" alt="drawing" width="600"/> <small> Donut high-level overview. Taken from the <a href="https://arxiv.org/abs/2111.15664">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/clovaai/donut).
345_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#usage-tips
.md
- The quickest way to get started with Donut is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Donut), which show how to use the model at inference time as well as fine-tuning on custom data. - Donut is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework.
345_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
Donut's [`VisionEncoderDecoder`] model accepts images as input and makes use of [`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image. The [`DonutImageProcessor`] class is responsible for preprocessing the input image and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`] decodes the generated target tokens to the target string. The [`DonutProcessor`] wraps [`DonutImageProcessor`] and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`]
345_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
[`DonutProcessor`] wraps [`DonutImageProcessor`] and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`] into a single instance to both extract the input features and decode the predicted token ids. - Step-by-step Document Image Classification ```py >>> import re
345_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT
345_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[1]["image"] >>> # prepare decoder inputs >>> task_prompt = "<s_rvlcdip>" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values
345_3_3