source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#overview
.md
encodings for all 99 analyzed languages, with the most notable improvements for non-European languages and non-Latin scripts. This, in turn, improves multilingual LM performance and diminishes the perplexity gap throughout diverse languages.*
108_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#overview
.md
This model was contributed by [Tomasz Limisiewicz](https://huggingface.co/Tomlim). The original code can be found [here](https://github.com/tomlimi/MYTE).
108_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#myt5tokenizer
.md
Construct a MyT5 tokenizer. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): The file containing the byte rewriting rules. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. unk_token (`str`, *optional*, defaults to `"<unk>"`):
108_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#myt5tokenizer
.md
The end of sequence token. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. extra_ids (`int`, *optional*, defaults to 125): The number of extra ids added to the end of the vocabulary, for use as sentinels. These tokens are
108_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#myt5tokenizer
.md
The number of extra ids added to the end of the vocabulary, for use as sentinels. These tokens are accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are indexed from the end of the vocabulary towards the beginning ("<extra_id_0>" is the last token in the vocabulary), as in ByT5 preprocessing (see [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117)).
108_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#myt5tokenizer
.md
additional_special_tokens (`List[str]`, *optional*): Additional special tokens used by the tokenizer. Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary Construct a MyT5 tokenizer. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args:
108_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#myt5tokenizer
.md
this superclass for more information regarding those methods. Args: vocab_file (`str`): The file containing the byte rewriting rules. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`):
108_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#myt5tokenizer
.md
token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. extra_ids (`int`, *optional*, defaults to 125): The number of extra ids added to the end of the vocabulary, for use as sentinels. These tokens are accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are indexed from the end of the vocabulary towards the beginning ("<extra_id_0>" is the last token in the vocabulary
108_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md
https://huggingface.co/docs/transformers/en/model_doc/myt5/#myt5tokenizer
.md
indexed from the end of the vocabulary towards the beginning ("<extra_id_0>" is the last token in the vocabulary), as in ByT5 preprocessing (see [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117)). additional_special_tokens (`List[str]`, *optional*): Additional special tokens used by the tokenizer.
108_2_6
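As a brief illustration of the sentinel convention described above, here is a minimal sketch. The checkpoint name `Tomlim/myt5-base` is an assumption and may need to be replaced with the MyT5 checkpoint you actually use.

```python
from transformers import AutoTokenizer

# Assumed checkpoint name; replace with the MyT5 checkpoint you actually use.
tokenizer = AutoTokenizer.from_pretrained("Tomlim/myt5-base")

# MyT5 encodes text through byte rewriting rules rather than a subword vocabulary.
input_ids = tokenizer("Life is like a box of chocolates.", return_tensors="pt").input_ids
print(input_ids.shape)

# Sentinels follow the ByT5 convention: "<extra_id_0>" is the last token in the
# vocabulary, "<extra_id_1>" the one before it, and so on.
print(tokenizer.convert_tokens_to_ids("<extra_id_0>"))
```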
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
109_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
109_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#overview
.md
The [`VisionEncoderDecoderModel`] can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin)) and any pretrained language model as the decoder (*e.g.* [RoBERTa](roberta), [GPT2](gpt2), [BERT](bert), [DistilBERT](distilbert)). The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for
109_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#overview
.md
The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. After such a [`VisionEncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples below for more information).
109_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#overview
.md
for more information). An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition. Refer to [TrOCR](trocr), which is an instance of [`VisionEncoderDecoderModel`].
109_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#randomly-initializing-visionencoderdecodermodel-from-model-configurations
.md
[`VisionEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`ViTModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. ```python >>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel >>> config_encoder = ViTConfig() >>> config_decoder = BertConfig()
109_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#randomly-initializing-visionencoderdecodermodel-from-model-configurations
.md
>>> config_encoder = ViTConfig() >>> config_decoder = BertConfig() >>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = VisionEncoderDecoderModel(config=config) ```
109_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
.md
[`VisionEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, *e.g.* [Swin](swin), can serve as the encoder, while pretrained auto-encoding models (*e.g.* BERT), pretrained causal language models (*e.g.* GPT2), and the pretrained decoder part of sequence-to-sequence models (*e.g.* the decoder of BART) can all be used as the decoder.
109_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
.md
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [`VisionEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder).
109_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
.md
To do so, the `VisionEncoderDecoderModel` class provides a [`VisionEncoderDecoderModel.from_encoder_decoder_pretrained`] method. ```python >>> from transformers import VisionEncoderDecoderModel
109_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
.md
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "microsoft/swin-base-patch4-window7-224-in22k", "google-bert/bert-base-uncased" ... ) ```
109_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference
.md
To load fine-tuned checkpoints of the `VisionEncoderDecoderModel` class, [`VisionEncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers. To perform inference, one uses the [`generate`] method, which autoregressively generates text. This method supports various forms of decoding, such as greedy decoding, beam search and multinomial sampling. ```python >>> import requests >>> from PIL import Image
109_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference
.md
>>> from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel >>> # load a fine-tuned image captioning model and corresponding tokenizer and image processor >>> model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") >>> tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning") >>> image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
109_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference
.md
>>> # let's perform inference on an image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
109_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference
.md
>>> # autoregressively generate caption (uses greedy decoding by default) >>> generated_ids = model.generate(pixel_values) >>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) a cat laying on a blanket next to a cat laying on a bed ```
109_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel
.md
[`TFVisionEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a PyTorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is: ```python >>> from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel >>> _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
109_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel
.md
>>> _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") >>> _model.encoder.save_pretrained("./encoder") >>> _model.decoder.save_pretrained("./decoder") >>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ... ) >>> # This is only for copying some specific attributes of this particular model. >>> model.config = _model.config ```
109_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#training
.md
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As you can see, only two inputs are required for the model to compute a loss: `pixel_values` (the images) and `labels` (the `input_ids` of the encoded target sequence). ```python >>> from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel >>> from datasets import load_dataset
109_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#training
.md
>>> image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k") >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "google/vit-base-patch16-224-in21k", "google-bert/bert-base-uncased" ... ) >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id
109_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#training
.md
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values >>> labels = tokenizer( ... "an image of two cats chilling on a couch", ... return_tensors="pt", ... ).input_ids
109_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#training
.md
>>> labels = tokenizer( ... "an image of two cats chilling on a couch", ... return_tensors="pt", ... ).input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(pixel_values=pixel_values, labels=labels).loss ``` This model was contributed by [nielsr](https://github.com/nielsrogge). This model's TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh).
109_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecoderconfig
.md
[`VisionEncoderDecoderConfig`] is the configuration class to store the configuration of a [`VisionEncoderDecoderModel`]. It is used to instantiate a Vision-Encoder-Text-Decoder model according to the specified arguments, defining the encoder and decoder configs. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: kwargs (*optional*): Dictionary of keyword arguments. Notably:
109_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecoderconfig
.md
Args: kwargs (*optional*): Dictionary of keyword arguments. Notably: - **encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the encoder config. - **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the decoder config. Examples: ```python >>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel
109_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecoderconfig
.md
>>> # Initializing a ViT & BERT style configuration >>> config_encoder = ViTConfig() >>> config_decoder = BertConfig() >>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> # Initializing a ViTBert model (with random weights) from a ViT & google-bert/bert-base-uncased style configurations >>> model = VisionEncoderDecoderModel(config=config)
109_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecoderconfig
.md
>>> # Accessing the model configuration >>> config_encoder = model.config.encoder >>> config_decoder = model.config.decoder >>> # set decoder config to causal lm >>> config_decoder.is_decoder = True >>> config_decoder.add_cross_attention = True >>> # Saving the model, including its configuration >>> model.save_pretrained("my-model")
109_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecoderconfig
.md
>>> # Saving the model, including its configuration >>> model.save_pretrained("my-model") >>> # loading model and config from pretrained folder >>> encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained("my-model") >>> model = VisionEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config) ``` <frameworkcontent> <pt>
109_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecodermodel
.md
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [`~AutoModel.from_pretrained`] function and the decoder is loaded via the [`~AutoModelForCausalLM.from_pretrained`] function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.
109_8_0
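A minimal sketch of this behaviour, assuming the `google/vit-base-patch16-224-in21k` encoder and `openai-community/gpt2` decoder checkpoints; the final print only illustrates that the decoder config is switched into causal-LM-with-cross-attention mode.

```python
from transformers import VisionEncoderDecoderModel

# The encoder is loaded as an AutoModel and the decoder as an AutoModelForCausalLM;
# cross-attention layers are added to the decoder and start out randomly
# initialized, so the combined model should be fine-tuned before use.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "openai-community/gpt2"
)

print(model.decoder.config.is_decoder, model.decoder.config.add_cross_attention)
```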
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecodermodel
.md
generative task, like image captioning. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained
109_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecodermodel
.md
Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement. After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
109_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecodermodel
.md
other model (see the examples for more information). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
109_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecodermodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`VisionEncoderDecoderConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
109_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecodermodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. [`VisionEncoderDecoderModel`] is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one as decoder when created with the [`~AutoModel.from_pretrained`] class method for the encoder and
109_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#visionencoderdecodermodel
.md
[`~AutoModel.from_pretrained`] class method for the encoder and [`~AutoModelForCausalLM.from_pretrained`] class method for the decoder. Methods: forward - from_encoder_decoder_pretrained </pt> <tf>
109_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#tfvisionencoderdecodermodel
.md
No docstring available for TFVisionEncoderDecoderModel Methods: call - from_encoder_decoder_pretrained </tf> <jax>
109_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-encoder-decoder/#flaxvisionencoderdecodermodel
.md
No docstring available for FlaxVisionEncoderDecoderModel Methods: __call__ - from_encoder_decoder_pretrained </jax> </frameworkcontent>
109_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#overview
.md
[C4AI Command R7B](https://cohere.com/blog/command-r7b) is an open weights research release of a 7 billion parameter model developed by Cohere and Cohere For AI. It has advanced capabilities optimized for various use cases, including reasoning, summarization, question answering, and code. The model is trained to perform sophisticated tasks including Retrieval Augmented Generation (RAG) and tool use. The model also has powerful agentic capabilities that can use and combine multiple tools over multiple steps
110_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#overview
.md
and tool use. The model also has powerful agentic capabilities that can use and combine multiple tools over multiple steps to accomplish more difficult tasks. It obtains top performance on enterprise-relevant code use cases. C4AI Command R7B is a multilingual model trained on 23 languages.
110_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#overview
.md
The model interleaves three sliding window attention layers (window size 4096) with RoPE for efficient local context modeling and relative positional encoding. Every fourth layer uses global attention without positional embeddings, enabling unrestricted token interactions across the entire sequence.
110_0_2
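A toy sketch of that interleaving, based only on the configuration attributes documented below; the indexing rule is an assumption for illustration and is not taken from the modeling code.

```python
from transformers import Cohere2Config

config = Cohere2Config()  # sliding_window_pattern defaults to 4

# Assumed rule: within each block of `sliding_window_pattern` layers, the last
# layer uses global attention and the preceding ones use sliding window attention.
layer_types = [
    "global" if (i + 1) % config.sliding_window_pattern == 0 else "sliding"
    for i in range(config.num_hidden_layers)
]
print(layer_types[:8])  # ['sliding', 'sliding', 'sliding', 'global', ...]
```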
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#overview
.md
The model has been trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian.
110_0_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#usage-tips
.md
The model and tokenizer can be loaded via: ```python # pip install transformers from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r7b-12-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id)
110_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#usage-tips
.md
# Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ```
110_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
This is the configuration class to store the configuration of a [`Cohere2Model`]. It is used to instantiate a Cohere2 model according to the specified arguments, defining the model architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Instantiating a configuration
110_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
documentation from [`PretrainedConfig`] for more information. Instantiating a configuration with the defaults will yield a similar configuration to that of the [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) model. Args: vocab_size (`int`, *optional*, defaults to 256000): Vocabulary size of the Cohere2 model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`Cohere2Model`]
110_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
`input_ids` passed when calling [`Cohere2Model`]. hidden_size (`int`, *optional*, defaults to 8192): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 22528): Dimension of the MLP representations. logit_scale (`float`, *optional*, defaults to 0.0625): The scaling factor for the output logits. num_hidden_layers (`int`, *optional*, defaults to 40): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 64):
110_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 64): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
110_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `num_attention_heads`.
110_2_4
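The meanpooling step mentioned above can be sketched on a toy key-projection weight; the shapes are illustrative only and are not taken from any Cohere checkpoint format.

```python
import torch

# Toy sizes: 8 query heads grouped into 2 key/value heads, head dimension 4,
# hidden size 32.
num_attention_heads, num_key_value_heads, head_dim, hidden_size = 8, 2, 4, 32
group_size = num_attention_heads // num_key_value_heads

# Multi-head key projection weight: (num_attention_heads * head_dim, hidden_size).
k_proj = torch.randn(num_attention_heads * head_dim, hidden_size)

# Mean-pool the original heads inside each group to build the GQA key heads.
k_proj_gqa = (
    k_proj.view(num_key_value_heads, group_size, head_dim, hidden_size)
    .mean(dim=1)
    .reshape(num_key_value_heads * head_dim, hidden_size)
)
print(k_proj_gqa.shape)  # torch.Size([8, 32])
```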
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 8192): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02):
110_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only
110_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 5): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 255001): End of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `True`): Whether to tie weight embeddings
110_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
End of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `True`): Whether to tie weight embeddings. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work on longer `max_position_embeddings`, we recommend updating this value accordingly. Expected contents:
110_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
accordingly. Expected contents: `rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length.
110_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining. `attention_factor` (`float`, *optional*): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to value recommended by the implementation, using the `factor` field to infer the suggested value. `beta_fast` (`float`, *optional*):
110_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
`factor` field to infer the suggested value. `beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32. `beta_slow` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*):
110_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2. `long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (>
110_2_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
`long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (> `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2. `low_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE. `high_freq_factor` (`float`, *optional*):
110_2_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
`high_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE. attention_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. sliding_window (`int`, *optional*, defaults to 4096):
110_2_14
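A hedged sketch of passing a `rope_scaling` dictionary when building the config; the values are illustrative only and not recommended settings.

```python
from transformers import Cohere2Config

# 'linear' scaling with factor 2.0 roughly targets twice the original
# pre-trained context length; max_position_embeddings is adjusted to match.
config = Cohere2Config(
    max_position_embeddings=16384,
    rope_scaling={"rope_type": "linear", "factor": 2.0},
)
print(config.rope_scaling)
```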
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
The dropout ratio for the attention probabilities. sliding_window (`int`, *optional*, defaults to 4096): Size of the sliding window attention context. sliding_window_pattern (`int`, *optional*, defaults to 4): Pattern for the sliding window attention. cache_implementation (`str`, *optional*, defaults to `"hybrid"`): the cache type to be used with `generate`. ```python >>> from transformers import Cohere2Model, Cohere2Config
110_2_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2config
.md
>>> # Initializing a Cohere2 model configuration >>> configuration = Cohere2Config() >>> # Initializing a model from the Cohere2 configuration >>> model = Cohere2Model(configuration) # doctest: +SKIP >>> # Accessing the model configuration >>> configuration = model.config # doctest: +SKIP ```
110_2_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2model
.md
The bare Cohere2 Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
110_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2model
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Cohere2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
110_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2model
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Cohere2DecoderLayer`] Args: config: Cohere2Config Methods: forward
110_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere2.md
https://huggingface.co/docs/transformers/en/model_doc/cohere2/#cohere2forcausallm
.md
No docstring available for Cohere2ForCausalLM Methods: forward
110_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
111_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
111_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#overview
.md
MatCha has been proposed in the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. The abstract of the paper states the following:
111_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#overview
.md
*Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language
111_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#overview
.md
pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and
111_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#overview
.md
also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks.*
111_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#model-description
.md
MatCha is a model that is trained using the `Pix2Struct` architecture. You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct). MatCha is a Visual Question Answering variant of the `Pix2Struct` architecture. It renders the input question on the image and predicts the answer.
111_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#usage
.md
Currently 6 checkpoints are available for MatCha: - `google/matcha`: the base MatCha model, used to fine-tune MatCha on downstream tasks - `google/matcha-chartqa`: MatCha model fine-tuned on ChartQA dataset. It can be used to answer questions about charts. - `google/matcha-plotqa-v1`: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. - `google/matcha-plotqa-v2`: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots.
111_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#usage
.md
- `google/matcha-plotqa-v2`: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. - `google/matcha-chart2text-statista`: MatCha model fine-tuned on Statista dataset. - `google/matcha-chart2text-pew`: MatCha model fine-tuned on Pew dataset. The models finetuned on `chart2text-pew` and `chart2text-statista` are more suited for summarization, whereas the models finetuned on `plotqa` and `chartqa` are more suited for question answering.
111_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#usage
.md
You can use these models as follows (example on a ChartQA dataset): ```python from transformers import AutoProcessor, Pix2StructForConditionalGeneration import requests from PIL import Image
111_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#usage
.md
model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa").to(0) processor = AutoProcessor.from_pretrained("google/matcha-chartqa") url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png" image = Image.open(requests.get(url, stream=True).raw)
111_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#usage
.md
inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt").to(0) predictions = model.generate(**inputs, max_new_tokens=512) print(processor.decode(predictions[0], skip_special_tokens=True)) ```
111_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#fine-tuning
.md
To fine-tune MatCha, refer to the pix2struct [fine-tuning notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb). For `Pix2Struct` models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence: ```python from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
111_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/matcha.md
https://huggingface.co/docs/transformers/en/model_doc/matcha/#fine-tuning
.md
optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05) scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000) ``` <Tip> MatCha is a model that is trained using `Pix2Struct` architecture. You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct). </Tip>
111_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
112_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
112_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#overview
.md
The SwitchTransformers model was proposed in [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
112_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#overview
.md
The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLPs are replaced by a Mixture of Experts (MoE). A routing mechanism (top-1 in this case) associates each token with one of the experts, where each expert is a dense MLP. While Switch Transformers have many more weights than their equivalent dense models, the sparsity allows better scaling and better finetuning performance at scale.
112_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#overview
.md
During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly which increases the model capacity without increasing the number of operations. The abstract from the paper is the following:
112_1_2
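A toy sketch of top-1 routing as described above, with made-up sizes; it illustrates the idea and is not the SwitchTransformers implementation itself.

```python
import torch
import torch.nn.functional as F

num_experts, d_model, num_tokens = 4, 16, 5

# Router producing one logit per expert, and a dense MLP per expert.
router = torch.nn.Linear(d_model, num_experts, bias=False)
experts = torch.nn.ModuleList(
    [
        torch.nn.Sequential(
            torch.nn.Linear(d_model, 4 * d_model),
            torch.nn.ReLU(),
            torch.nn.Linear(4 * d_model, d_model),
        )
        for _ in range(num_experts)
    ]
)

tokens = torch.randn(num_tokens, d_model)
probs = F.softmax(router(tokens), dim=-1)  # routing probabilities
expert_index = probs.argmax(dim=-1)        # top-1 expert per token

# Each token is processed only by its selected expert, scaled by its router probability.
out = torch.stack(
    [probs[i, expert_index[i]] * experts[expert_index[i]](tokens[i]) for i in range(num_tokens)]
)
print(expert_index.tolist(), out.shape)
```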
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#overview
.md
*In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We
112_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#overview
.md
been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in
112_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#overview
.md
time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.*
112_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#overview
.md
This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/google/flaxformer/tree/main/flaxformer/architectures/moe).
112_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#usage-tips
.md
- SwitchTransformers uses the [`T5Tokenizer`], which can be loaded directly from each model's repository. - The released weights are pretrained on an English [Masked Language Modeling](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323/en/glossary#general-terms) task and should be finetuned.
112_2_0
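Following these tips, a minimal sketch of loading the pretrained weights and filling a masked span; the prompt is illustrative.

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

# The released weights were pretrained on masked span filling, so sentinel
# tokens such as <extra_id_0> mark the spans the model should predict.
input_ids = tokenizer(
    "A <extra_id_0> walks into a bar and orders a <extra_id_1> with two straws.",
    return_tensors="pt",
).input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```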
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#resources
.md
- [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization)
112_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig
.md
This is the configuration class to store the configuration of a [`SwitchTransformersModel`]. It is used to instantiate a SwitchTransformers model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SwitchTransformers [google/switch-base-8](https://huggingface.co/google/switch-base-8) architecture.
112_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig
.md
SwitchTransformers [google/switch-base-8](https://huggingface.co/google/switch-base-8) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Arguments: vocab_size (`int`, *optional*, defaults to 32128): Vocabulary size of the SwitchTransformers model. Defines the number of different tokens that can be
112_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig
.md
Vocabulary size of the SwitchTransformers model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`SwitchTransformersModel`]. d_model (`int`, *optional*, defaults to 768): Size of the encoder layers and the pooler layer. d_kv (`int`, *optional*, defaults to 64): Size of the key, query, value projections per attention head. `d_kv` has to be equal to `d_model // num_heads`. d_ff (`int`, *optional*, defaults to 2048):
112_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md
https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig
.md
num_heads`. d_ff (`int`, *optional*, defaults to 2048): Size of the intermediate feed forward layer in each `SwitchTransformersBlock`. expert_capacity (`int`, *optional*, defaults to 64): Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular Transformer. num_layers (`int`, *optional*, defaults to 12): Number of dense hidden layers in the Transformer encoder layer. num_sparse_encoder_layers (`int`, *optional*, defaults to 3):
112_4_3
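As a sketch of how these arguments fit together, here is a toy configuration; the sizes are arbitrary, chosen only to respect `d_kv == d_model // num_heads`, and real checkpoints such as google/switch-base-8 use the documented defaults.

```python
from transformers import SwitchTransformersConfig, SwitchTransformersModel

config = SwitchTransformersConfig(
    d_model=64,
    num_heads=8,
    d_kv=8,          # must equal d_model // num_heads
    d_ff=128,
    num_layers=2,
    num_sparse_encoder_layers=1,
    num_decoder_layers=2,
    num_sparse_decoder_layers=1,
    num_experts=4,
    expert_capacity=16,
)
model = SwitchTransformersModel(config)
print(sum(p.numel() for p in model.parameters()))
```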