source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusmodel
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`BigBirdPegasusConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
373_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
373_5_2
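A minimal usage sketch for the bare model, assuming the public `google/bigbird-pegasus-large-arxiv` checkpoint:

```python
import torch
from transformers import AutoTokenizer, BigBirdPegasusModel

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusModel.from_pretrained("google/bigbird-pegasus-large-arxiv")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    # when decoder_input_ids are omitted, they default to a shifted copy of input_ids
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```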
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforconditionalgeneration
.md
The BigBirdPegasus Model with a language modeling head. Can be used for summarization. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
373_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforconditionalgeneration
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`BigBirdPegasusConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
373_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforconditionalgeneration
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
373_6_2
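For illustration, a short summarization sketch, assuming the `google/bigbird-pegasus-large-arxiv` checkpoint (any BigBirdPegasus seq2seq checkpoint works the same way):

```python
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")

article = "Replace this with a long scientific article to summarize."  # placeholder input
inputs = tokenizer(article, return_tensors="pt")

summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```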
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforsequenceclassification
.md
BigBirdPegasus model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
373_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforsequenceclassification
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`BigBirdPegasusConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
373_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforsequenceclassification
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
373_7_2
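A sketch of the sequence classification head; note the pretrained arXiv checkpoint ships no classification head, so the head weights below are randomly initialized and would need fine-tuning:

```python
import torch
from transformers import AutoTokenizer, BigBirdPegasusForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
# num_labels=2 is an illustrative choice; the classification head is freshly initialized
model = BigBirdPegasusForSequenceClassification.from_pretrained(
    "google/bigbird-pegasus-large-arxiv", num_labels=2
)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(-1).item()
```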
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforquestionanswering
.md
BigBirdPegasus Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
373_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`BigBirdPegasusConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
373_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforquestionanswering
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
373_8_2
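A sketch of extractive QA with the span head (again, the head is randomly initialized unless you load a checkpoint fine-tuned for QA):

```python
import torch
from transformers import AutoTokenizer, BigBirdPegasusForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForQuestionAnswering.from_pretrained("google/bigbird-pegasus-large-arxiv")

question, context = "Who proposed BigBird?", "BigBird was proposed by researchers at Google."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end positions and decode the span between them
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```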
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md
https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusforcausallm
.md
No docstring available for BigBirdPegasusForCausalLM Methods: forward
373_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
374_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
374_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#overview
.md
The EfficientNet model was proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models, which achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models. The abstract from the paper is the following:
374_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#overview
.md
*Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We
374_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#overview
.md
that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.
374_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#overview
.md
To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%),
374_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#overview
.md
the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.*
374_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#overview
.md
This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet).
374_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
This is the configuration class to store the configuration of an [`EfficientNetModel`]. It is used to instantiate an EfficientNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the EfficientNet [google/efficientnet-b7](https://huggingface.co/google/efficientnet-b7) architecture.
374_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
[google/efficientnet-b7](https://huggingface.co/google/efficientnet-b7) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_channels (`int`, *optional*, defaults to 3): The number of input channels. image_size (`int`, *optional*, defaults to 600): The input image size. width_coefficient (`float`, *optional*, defaults to 2.0):
374_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
The input image size. width_coefficient (`float`, *optional*, defaults to 2.0): Scaling coefficient for network width at each stage. depth_coefficient (`float`, *optional*, defaults to 3.1): Scaling coefficient for network depth at each stage. depth_divisor (`int`, *optional*, defaults to 8): A unit of network width. kernel_sizes (`List[int]`, *optional*, defaults to `[3, 3, 5, 3, 5, 5, 3]`): List of kernel sizes to be used in each block.
374_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
kernel_sizes (`List[int]`, *optional*, defaults to `[3, 3, 5, 3, 5, 5, 3]`): List of kernel sizes to be used in each block. in_channels (`List[int]`, *optional*, defaults to `[32, 16, 24, 40, 80, 112, 192]`): List of input channel sizes to be used in each block for convolutional layers. out_channels (`List[int]`, *optional*, defaults to `[16, 24, 40, 80, 112, 192, 320]`): List of output channel sizes to be used in each block for convolutional layers.
374_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
List of output channel sizes to be used in each block for convolutional layers. depthwise_padding (`List[int]`, *optional*, defaults to `[]`): List of block indices with square padding. strides (`List[int]`, *optional*, defaults to `[1, 2, 2, 2, 1, 2, 1]`): List of stride sizes to be used in each block for convolutional layers. num_block_repeats (`List[int]`, *optional*, defaults to `[1, 2, 2, 3, 3, 4, 1]`): List of the number of times each block is to be repeated.
374_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
List of the number of times each block is to be repeated. expand_ratios (`List[int]`, *optional*, defaults to `[1, 6, 6, 6, 6, 6, 6]`): List of scaling coefficients for each block. squeeze_expansion_ratio (`float`, *optional*, defaults to 0.25): Squeeze expansion ratio. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in each block. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"`, `"silu"` and `"mish"` are supported.
374_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
`"selu", `"gelu_new"`, `"silu"` and `"mish"` are supported. hiddem_dim (`int`, *optional*, defaults to 1280): The hidden dimension of the layer before the classification head. pooling_type (`str` or `function`, *optional*, defaults to `"mean"`): Type of final pooling to be applied before the dense classification head. Available options are [`"mean"`, `"max"`] initializer_range (`float`, *optional*, defaults to 0.02):
374_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
`"max"`] initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. batch_norm_eps (`float`, *optional*, defaults to 1e-3): The epsilon used by the batch normalization layers. batch_norm_momentum (`float`, *optional*, defaults to 0.99): The momentum used by the batch normalization layers. dropout_rate (`float`, *optional*, defaults to 0.5): The dropout rate to be applied before final classifier layer.
374_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
dropout_rate (`float`, *optional*, defaults to 0.5): The dropout rate to be applied before the final classifier layer. drop_connect_rate (`float`, *optional*, defaults to 0.2): The drop rate for skip connections. Example: ```python >>> from transformers import EfficientNetConfig, EfficientNetModel
374_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetconfig
.md
>>> # Initializing an EfficientNet efficientnet-b7 style configuration >>> configuration = EfficientNetConfig() >>> # Initializing a model (with random weights) from the efficientnet-b7 style configuration >>> model = EfficientNetModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
374_2_9
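Beyond the defaults shown above, the documented arguments can be overridden at construction time. A sketch with illustrative (non-official) values:

```python
from transformers import EfficientNetConfig, EfficientNetModel

# a smaller configuration; these values are illustrative, not an official preset
config = EfficientNetConfig(
    image_size=224,
    width_coefficient=1.0,
    depth_coefficient=1.0,
    hidden_dim=1280,
    dropout_rate=0.2,
)
model = EfficientNetModel(config)  # randomly initialized with this architecture
```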
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetimageprocessor
.md
Constructs an EfficientNet image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in `preprocess`. size (`Dict[str, int]`, *optional*, defaults to `{"height": 346, "width": 346}`): Size of the image after `resize`. Can be overridden by `size` in `preprocess`. resample (`PILImageResampling` filter, *optional*, defaults to 0, i.e. `PILImageResampling.NEAREST`):
374_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetimageprocessor
.md
resample (`PILImageResampling` filter, *optional*, defaults to 0, i.e. `PILImageResampling.NEAREST`): Resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`. do_center_crop (`bool`, *optional*, defaults to `False`): Whether to center crop the image. If the input size is smaller than `crop_size` along any edge, the image is padded with 0's and then center cropped. Can be overridden by `do_center_crop` in `preprocess`.
374_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetimageprocessor
.md
is padded with 0's and then center cropped. Can be overridden by `do_center_crop` in `preprocess`. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 289, "width": 289}`): Desired output size when applying center-cropping. Can be overridden by `crop_size` in `preprocess`. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method.
374_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetimageprocessor
.md
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. rescale_offset (`bool`, *optional*, defaults to `False`): Whether to rescale the image between [-scale_range, scale_range] instead of [0, scale_range]. Can be overridden by the `rescale_offset` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
374_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetimageprocessor
.md
overridden by the `rescale_offset` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method.
374_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetimageprocessor
.md
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
374_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetimageprocessor
.md
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. include_top (`bool`, *optional*, defaults to `True`): Whether to rescale the image again. Should be set to True if the inputs are used for image classification. Methods: preprocess
374_3_6
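A minimal preprocessing sketch, assuming the `google/efficientnet-b7` checkpoint and a COCO test image:

```python
import requests
from PIL import Image
from transformers import EfficientNetImageProcessor

processor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b7")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
print(inputs.pixel_values.shape)  # (1, 3, height, width)
```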
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetmodel
.md
The bare EfficientNet model outputting raw features without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`EfficientNetConfig`]): Model configuration class with all the parameters of the model.
374_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetmodel
.md
behavior. Parameters: config ([`EfficientNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
374_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetforimageclassification
.md
EfficientNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`EfficientNetConfig`]): Model configuration class with all the parameters of the model.
374_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientnet.md
https://huggingface.co/docs/transformers/en/model_doc/efficientnet/#efficientnetforimageclassification
.md
behavior. Parameters: config ([`EfficientNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
374_5_1
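An end-to-end classification sketch, assuming the `google/efficientnet-b7` checkpoint:

```python
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, EfficientNetForImageClassification

image = load_dataset("huggingface/cats-image", split="test")["image"][0]

processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```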
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
375_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
375_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/#overview
.md
Flan-UL2 is an encoder-decoder model based on the T5 architecture. It uses the same configuration as the [UL2](ul2) model released earlier last year, and was fine-tuned using the "Flan" prompt tuning and dataset collection. Similar to `Flan-T5`, one can directly use FLAN-UL2 weights without fine-tuning the model. According to the original blog, these are the notable improvements:
375_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/#overview
.md
According to the original blog, these are the notable improvements: - The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large. - The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning.
375_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/#overview
.md
- The original UL2 model also had mode switch tokens that were rather mandatory to get good performance. However, they were a little cumbersome, as they often required changes during inference or fine-tuning. In this update/change, we continued training UL2 20B for an additional 100k steps (with a small batch) to forget “mode tokens” before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore. Google has released the following variants:
375_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/#overview
.md
Google has released the following variants: The original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints).
375_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/#running-on-low-resource-devices
.md
The model is pretty heavy (~40GB in half precision), so if you just want to run the model, make sure you load it in 8-bit and use `device_map="auto"` to make sure you don't run into OOM issues! ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
375_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-ul2.md
https://huggingface.co/docs/transformers/en/model_doc/flan-ul2/#running-on-low-resource-devices
.md
>>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic'] ``` <Tip> Refer to [T5's documentation page](t5) for API reference, tips, code examples and notebooks. </Tip>
375_2_1
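On recent transformers releases the `load_in_8bit` shortcut is deprecated in favor of an explicit quantization config; a sketch assuming `bitsandbytes` is installed:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

# 8-bit loading expressed via BitsAndBytesConfig instead of the load_in_8bit kwarg
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-ul2",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
```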
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
376_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/
.md
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. specific language governing permissions and limitations under the License. -->
376_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#overview
.md
The Nougat model was proposed in [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic. Nougat uses the same architecture as [Donut](donut), meaning an image Transformer encoder and an autoregressive text Transformer decoder to translate scientific PDFs to markdown, enabling easier access to them. The abstract from the paper is the following:
376_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#overview
.md
*Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual Transformer model that performs an Optical Character Recognition (OCR) task for processing scientific documents into a markup language, and demonstrate the effectiveness of our model on a new dataset of scientific
376_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#overview
.md
scientific documents into a markup language, and demonstrate the effectiveness of our model on a new dataset of scientific documents. The proposed approach offers a promising solution to enhance the accessibility of scientific knowledge in the digital age, by bridging the gap between human-readable documents and machine-readable text. We release the models and code to accelerate future work on scientific text recognition.*
376_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/nougat_architecture.jpg" alt="drawing" width="600"/> <small> Nougat high-level overview. Taken from the <a href="https://arxiv.org/abs/2308.13418">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/nougat).
376_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#usage-tips
.md
- The quickest way to get started with Nougat is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Nougat), which show how to use the model at inference time as well as fine-tuning on custom data. - Nougat is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework. The model is identical to [Donut](donut) in terms of architecture.
376_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#inference
.md
Nougat's [`VisionEncoderDecoder`] model accepts images as input and makes use of [`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image. The [`NougatImageProcessor`] class is responsible for preprocessing the input image and [`NougatTokenizerFast`] decodes the generated target tokens to the target string. The [`NougatProcessor`] wraps [`NougatImageProcessor`] and [`NougatTokenizerFast`] classes
376_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#inference
.md
[`NougatProcessor`] wraps [`NougatImageProcessor`] and [`NougatTokenizerFast`] classes into a single instance to both extract the input features and decode the predicted token ids. - Step-by-step PDF transcription ```py >>> from huggingface_hub import hf_hub_download >>> import re >>> from PIL import Image
376_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#inference
.md
>>> from transformers import NougatProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = NougatProcessor.from_pretrained("facebook/nougat-base") >>> model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT
376_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#inference
.md
>>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # prepare PDF image for the model >>> filepath = hf_hub_download(repo_id="hf-internal-testing/fixtures_docvqa", filename="nougat_paper.png", repo_type="dataset") >>> image = Image.open(filepath) >>> pixel_values = processor(image, return_tensors="pt").pixel_values
376_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#inference
.md
>>> # generate transcription (here we only generate 30 tokens) >>> outputs = model.generate( ... pixel_values.to(device), ... min_length=1, ... max_new_tokens=30, ... bad_words_ids=[[processor.tokenizer.unk_token_id]], ... )
376_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#inference
.md
>>> sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0] >>> sequence = processor.post_process_generation(sequence, fix_markdown=False) >>> # note: we're using repr here for the sake of printing the \n characters; feel free to just print the sequence >>> print(repr(sequence)) '\n\n# Nougat: Neural Optical Understanding for Academic Documents\n\n Lukas Blecher\n\nCorrespondence to: lblecher@' ```
376_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#inference
.md
'\n\n# Nougat: Neural Optical Understanding for Academic Documents\n\n Lukas Blecher\n\nCorrespondence to: lblecher@' ``` See the [model hub](https://huggingface.co/models?filter=nougat) to look for Nougat checkpoints. <Tip> The model is identical to [Donut](donut) in terms of architecture. </Tip>
376_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatimageprocessor
.md
Constructs a Nougat image processor. Args: do_crop_margin (`bool`, *optional*, defaults to `True`): Whether to crop the image margins. do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"height": 896, "width": 672}`): Size of the image after resizing. Can be overridden by `size` in the `preprocess` method.
376_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatimageprocessor
.md
Size of the image after resizing. Can be overridden by `size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. do_thumbnail (`bool`, *optional*, defaults to `True`): Whether to resize the image using the thumbnail method. do_align_long_axis (`bool`, *optional*, defaults to `False`):
376_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatimageprocessor
.md
Whether to resize the image using the thumbnail method. do_align_long_axis (`bool`, *optional*, defaults to `False`): Whether to align the long axis of the image with the long axis of `size` by rotating by 90 degrees. do_pad (`bool`, *optional*, defaults to `True`): Whether to pad the images to the largest image size in the batch. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
376_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatimageprocessor
.md
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
376_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatimageprocessor
.md
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`): Image standard deviation.
376_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatimageprocessor
.md
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`): Image standard deviation. Methods: preprocess
376_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
Fast tokenizer for Nougat (backed by HuggingFace tokenizers library). This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. This class mainly adds Nougat-specific methods for postprocessing the generated text. Args: vocab_file (`str`, *optional*): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that
376_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that contains the vocabulary necessary to instantiate a tokenizer. tokenizer_file (`str`, *optional*): [tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that contains everything needed to load the tokenizer. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
376_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
contains everything needed to load the tokenizer. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`): Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<s>"`):
376_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
token instead. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. Class attributes (overridden by derived classes)
376_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
Class attributes (overridden by derived classes) - **vocab_files_names** (`Dict[str, str]`) -- A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string). - **pretrained_vocab_files_map** (`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the
376_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file. - **model_input_names** (`List[str]`) -- A list of inputs expected in the forward pass of the model. - **padding_side** (`str`) -- The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`.
376_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
Should be `'right'` or `'left'`. - **truncation_side** (`str`) -- The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`. Args: model_max_length (`int`, *optional*): The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the
376_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`). padding_side (`str`, *optional*): The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. truncation_side (`str`, *optional*):
376_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
Default value is picked from the class attribute of the same name. truncation_side (`str`, *optional*): The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. chat_template (`str`, *optional*): A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description.
376_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
https://huggingface.co/docs/transformers/chat_templating for a full description. model_input_names (`List[str]`, *optional*): The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or `"attention_mask"`). Default value is picked from the class attribute of the same name. bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`.
376_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`. eos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the end of a sentence. Will be associated to `self.eos_token` and `self.eos_token_id`. unk_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`.
376_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`. sep_token (`str` or `tokenizers.AddedToken`, *optional*): A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to `self.sep_token` and `self.sep_token_id`. pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by
376_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`. cls_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the class of the input (used by BERT for instance). Will be associated to `self.cls_token` and `self.cls_token_id`. mask_token (`str` or `tokenizers.AddedToken`, *optional*):
376_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
`self.cls_token` and `self.cls_token_id`. mask_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to `self.mask_token` and `self.mask_token_id`. additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with
376_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with `skip_special_tokens=True`. If they are not part of the vocabulary, they will be added at the end of the vocabulary. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process. split_special_tokens (`bool`, *optional*, defaults to `False`):
376_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
tokenization process. split_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the special tokens should be split during the tokenization process. Setting this will affect the internal state of the tokenizer. The default behavior is to not split special tokens. This means that if `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") == ['<s>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`.
376_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougattokenizerfast
.md
`split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`. tokenizer_object ([`tokenizers.Tokenizer`]): A [`tokenizers.Tokenizer`] object from 🤗 tokenizers to instantiate from. See [Using tokenizers from 🤗 tokenizers](../fast_tokenizers) for more information. tokenizer_file ([`str`]): A path to a local JSON file representing a previously serialized [`tokenizers.Tokenizer`] object from 🤗 tokenizers.
376_5_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatprocessor
.md
Constructs a Nougat processor which wraps a Nougat image processor and a Nougat tokenizer into a single processor. [`NougatProcessor`] offers all the functionalities of [`NougatImageProcessor`] and [`NougatTokenizerFast`]. See the [`~NougatProcessor.__call__`] and [`~NougatProcessor.decode`] for more information. Args: image_processor ([`NougatImageProcessor`]): An instance of [`NougatImageProcessor`]. The image processor is a required input. tokenizer ([`NougatTokenizerFast`]):
376_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nougat.md
https://huggingface.co/docs/transformers/en/model_doc/nougat/#nougatprocessor
.md
An instance of [`NougatImageProcessor`]. The image processor is a required input. tokenizer ([`NougatTokenizerFast`]): An instance of [`NougatTokenizerFast`]. The tokenizer is a required input. Methods: __call__ - from_pretrained - save_pretrained - batch_decode - decode - post_process_generation
376_6_1
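A small sketch of the post-processing step in isolation; the input string below is illustrative, not real model output:

```python
from transformers import NougatProcessor

processor = NougatProcessor.from_pretrained("facebook/nougat-base")

# illustrative raw generation; fix_markdown=True also repairs markdown artifacts
raw = "# Nougat: Neural Optical Understanding for Academic Documents\n\nLukas Blecher"
cleaned = processor.post_process_generation(raw, fix_markdown=True)
print(cleaned)
```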
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
377_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
377_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#overview
.md
LLaVa is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture; in other words, it is a multi-modal version of LLMs fine-tuned for chat/instructions.
377_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#overview
.md
The LLaVa model was proposed in [Visual Instruction Tuning](https://arxiv.org/abs/2304.08485) and improved in [Improved Baselines with Visual Instruction Tuning](https://arxiv.org/pdf/2310.03744) by Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee. The abstract from the paper is the following:
377_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#overview
.md
*Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final
377_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#overview
.md
response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in ∼1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available*
377_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/llava_architecture.jpg" alt="drawing" width="600"/> <small> LLaVa architecture. Taken from the <a href="https://arxiv.org/abs/2304.08485">original paper.</a> </small> This model was contributed by [ArthurZ](https://huggingface.co/ArthurZ) and [ybelkada](https://huggingface.co/ybelkada). The original code can be found [here](https://github.com/haotian-liu/LLaVA/tree/main/llava).
377_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#usage-tips
.md
- We advise users to use `padding_side="left"` when computing batched generation as it leads to more accurate results. Simply make sure to call `processor.tokenizer.padding_side = "left"` before generating. - Note that the model has not been explicitly trained to process multiple images in the same prompt; although this is technically possible, you may experience inaccurate results. > [!NOTE]
377_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#usage-tips
.md
> [!NOTE] > LLaVA models after release v4.46 will raise warnings about adding `processor.patch_size = {{patch_size}}`, `processor.num_additional_image_tokens = {{num_additional_image_tokens}}` and `processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. It is strongly recommended to add these attributes to the processor if you own the model checkpoint, or to open a PR if it is not owned by you.
377_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#usage-tips
.md
Adding these attributes means that LLaVA will try to infer the number of image tokens required per image and expand the text with as many `<image>` placeholders as there will be tokens. Usually it is around 500 tokens per image, so make sure that the text is not truncated, as otherwise merging the embeddings will fail.
377_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#usage-tips
.md
The attributes can be obtained from the model config, e.g. `model.config.vision_config.patch_size` or `model.config.vision_feature_select_strategy`. The `num_additional_image_tokens` should be `1` if the vision backbone adds a CLS token, or `0` if nothing extra is added to the vision patches.
377_2_3
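A sketch of wiring those attributes up, assuming the `llava-hf/llava-1.5-7b-hf` checkpoint:

```python
from transformers import AutoProcessor, LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

# copy the values from the model config onto the processor
processor.patch_size = model.config.vision_config.patch_size
processor.vision_feature_select_strategy = model.config.vision_feature_select_strategy
processor.num_additional_image_tokens = 1  # assumes a CLIP-style backbone that adds a CLS token
```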
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md
https://huggingface.co/docs/transformers/en/model_doc/llava/#single-image-inference
.md
For best results, we recommend using the processor's `apply_chat_template()` method to format your prompt correctly. For that, you need to construct a conversation history; passing in a plain string will not format your prompt. Each message in the conversation history for chat templates is a dictionary with keys "role" and "content". The "content" should be a list of dictionaries, for "text" and "image" modalities, as follows: ```python from transformers import AutoProcessor
377_3_0
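A self-contained sketch of that conversation format, assuming the `llava-hf/llava-1.5-7b-hf` checkpoint:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
# renders the history into the model's expected prompt string
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
print(prompt)  # e.g. "USER: <image>\nWhat is shown in this image? ASSISTANT:"
```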