Dataset columns: `source` (string, 470 distinct values), `url` (string, 49–167 chars), `file_type` (string, 1 distinct value), `chunk` (string, 1–512 chars), `chunk_id` (string, 5–9 chars).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
126_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#overview
.md
The IDEFICS model was proposed in [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents](https://huggingface.co/papers/2306.16527) by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh. The abstract from the paper is the following:
126_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#overview
.md
*Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks that require reasoning over one or multiple images to generate a text. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages
126_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#overview
.md
the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train an 80 billion parameter vision and language model on the dataset and obtain competitive performance on various multimodal
126_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#overview
.md
an 80 billion parameter vision and language model on the dataset and obtain competitive performance on various multimodal benchmarks. We release the code to reproduce the dataset along with the dataset itself.*
126_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#overview
.md
This model was contributed by [HuggingFaceM4](https://huggingface.co/HuggingFaceM4). The original code can be found [here](<INSERT LINK TO GITHUB REPO HERE>). (TODO: no public link yet.) <Tip warning={true}> The IDEFICS modeling code in Transformers is for fine-tuning and running inference with the pre-trained IDEFICS models. To train a new IDEFICS model from scratch, use the m4 codebase (a link will be provided once it's made public). </Tip>
126_1_4
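The tip above covers inference with the released checkpoints. Below is a minimal inference sketch, assuming the `HuggingFaceM4/idefics-9b` checkpoint, enough GPU memory, `accelerate` installed for `device_map="auto"`, a public COCO example image, and the prompt format from the released IDEFICS examples (a list of prompts, each interleaving text with images or image URLs); the exact processor call signature may differ across library versions.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# Each prompt interleaves text and images (PIL images or image URLs).
image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
prompts = [[image, "Question: What is shown in this image? Answer:"]]

inputs = processor(prompts, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```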
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
This is the configuration class to store the configuration of an [`IdeficsModel`]. It is used to instantiate an Idefics model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Idefics-9B, e.g. [HuggingFaceM4/idefics-9b](https://huggingface.co/HuggingFaceM4/idefics-9b). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
126_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: additional_vocab_size (`int`, *optional*, defaults to 0): Additional vocabulary size of the model, typically for the special "<img>" token. Additional vocab tokens are always trainable whereas regular vocab tokens can be frozen or not. vocab_size (`int`, *optional*, defaults to 32000):
126_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
are always trainable whereas regular vocab tokens can be frozen or not. vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the Idefics model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`~IdeficsModel`]. hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 11008): Dimension of the MLP representations.
126_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
intermediate_size (`int`, *optional*, defaults to 11008): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer encoder. dropout (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
126_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. alpha_initializer (`str`, *optional*, defaults to `"zeros"`): Initialization type for the alphas.
126_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
alpha_initializer (`str`, *optional*, defaults to `"zeros"`): Initialization type for the alphas. alphas_initializer_range (`float`, *optional*, defaults to 0.0): The standard deviation of the truncated_normal_initializer for initializing the alphas in the Gated Cross Attention. alpha_type (`str`, *optional*, defaults to `"float"`): Whether the gating alphas should be vectors or single floats. rms_norm_eps (`float`, *optional*, defaults to 1e-6): The epsilon used by the rms normalization layers.
126_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
rms_norm_eps (`float`, *optional*, defaults to 1e-6): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 1): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 2): End of stream token id.
126_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 2): End of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie the word embeddings. cross_layer_interval (`int`, *optional*, defaults to 1): Interval for cross attention (from text to image) layers. qk_layer_norms (`bool`, *optional*, defaults to `False`): Whether to add layer norm after q and k. freeze_text_layers (`bool`, *optional*, defaults to `True`): Whether to freeze the text layers.
126_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
freeze_text_layers (`bool`, *optional*, defaults to `True`): Whether to freeze the text layers. freeze_text_module_exceptions (`List[str]`, *optional*, defaults to `[]`): Exceptions to freezing the text layers when `freeze_text_layers` is `True`. freeze_lm_head (`bool`, *optional*, defaults to `False`): Whether to freeze the lm head. freeze_vision_layers (`bool`, *optional*, defaults to `True`): Whether to freeze the vision layers. freeze_vision_module_exceptions (`List[str]`, *optional*, defaults to `[]`):
126_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
freeze_vision_module_exceptions (`List[str]`, *optional*, defaults to `[]`): Exceptions to freezing the vision layers when `freeze_vision_layers` is `True`. use_resampler (`bool`, *optional*, defaults to `False`): Whether to use the Resampler. vision_config (`IdeficsVisionConfig`, *optional*): Custom vision config or dict. perceiver_config (`IdeficsPerceiverConfig`, *optional*): Custom perceiver config or dict. Example: ```python >>> from transformers import IdeficsModel, IdeficsConfig
126_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsconfig
.md
>>> # Initializing an Idefics idefics-9b style configuration
>>> configuration = IdeficsConfig()

>>> # Initializing a model from the idefics-9b style configuration
>>> model = IdeficsModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
126_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsmodel
.md
The bare IDEFICS Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
126_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`IdeficsConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
126_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of `config.num_hidden_layers` layers. Each layer is an [`IdeficsDecoderLayer`]. Args: config: IdeficsConfig Methods: forward
126_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsforvisiontext2text
.md
No docstring available for IdeficsForVisionText2Text Methods: forward
126_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#tfideficsmodel
.md
No docstring available for TFIdeficsModel Methods: call
126_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#tfideficsforvisiontext2text
.md
No docstring available for TFIdeficsForVisionText2Text Methods: call
126_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsimageprocessor
.md
Constructs an IDEFICS image processor. Args: image_size (`int`, *optional*, defaults to 224): The size to resize images to. image_mean (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
126_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsimageprocessor
.md
overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. image_num_channels (`int`, *optional*, defaults to 3):
126_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsimageprocessor
.md
image_num_channels (`int`, *optional*, defaults to 3): Number of image channels. Methods: preprocess
126_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsprocessor
.md
Constructs an IDEFICS processor which wraps a LLaMA tokenizer and IDEFICS image processor into a single processor. [`IdeficsProcessor`] offers all the functionalities of [`IdeficsImageProcessor`] and [`LlamaTokenizerFast`]. See the docstring of [`~IdeficsProcessor.__call__`] and [`~IdeficsProcessor.decode`] for more information. Args: image_processor (`IdeficsImageProcessor`): An instance of [`IdeficsImageProcessor`]. The image processor is a required input. tokenizer (`LlamaTokenizerFast`):
126_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/#ideficsprocessor
.md
An instance of [`IdeficsImageProcessor`]. The image processor is a required input. tokenizer (`LlamaTokenizerFast`): An instance of [`LlamaTokenizerFast`]. The tokenizer is a required input. image_size (`int`, *optional*, defaults to 224): Image size (assuming a square image). add_end_of_utterance_token (`str`, *optional*): The string representation of the token marking the end of an utterance. Methods: __call__
126_8_1
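A minimal sketch of what the processor produces, assuming the `HuggingFaceM4/idefics-9b` checkpoint, a public COCO image URL, and the interleaved prompt format shown earlier; the exact output key names may vary by library version.

```python
from transformers import AutoProcessor

# AutoProcessor resolves to an IdeficsProcessor for IDEFICS checkpoints.
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b")
print(type(processor.image_processor).__name__)  # IdeficsImageProcessor
print(type(processor.tokenizer).__name__)        # LlamaTokenizerFast

# One call tokenizes the text and preprocesses the images of each prompt.
prompts = [["Describe this image:", "http://images.cocodataset.org/val2017/000000039769.jpg"]]
inputs = processor(prompts, return_tensors="pt")
print(sorted(inputs.keys()))  # e.g. attention_mask, image_attention_mask, input_ids, pixel_values
```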
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
127_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
127_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#overview
.md
The ViLT model was proposed in [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. ViLT incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). The abstract from the paper is the following: *Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks.
127_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#overview
.md
*Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more
127_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#overview
.md
find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically
127_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#overview
.md
Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vilt_architecture.jpg" alt="drawing" width="600"/>
127_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#overview
.md
alt="drawing" width="600"/> <small> ViLT architecture. Taken from the <a href="https://arxiv.org/abs/2102.03334">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/dandelin/ViLT).
127_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#usage-tips
.md
- The quickest way to get started with ViLT is by checking the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ViLT) (which showcase both inference and fine-tuning on custom data). - ViLT is a model that takes both `pixel_values` and `input_ids` as input. One can use [`ViltProcessor`] to prepare data for the model. This processor wraps an image processor (for the image modality) and a tokenizer (for the language modality) into one.
127_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#usage-tips
.md
This processor wraps an image processor (for the image modality) and a tokenizer (for the language modality) into one, as shown in the sketch after these tips. - ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a `pixel_mask` that indicates which pixel values are real and which are padding. [`ViltProcessor`] automatically creates this for you.
127_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#usage-tips
.md
which pixel values are real and which are padding. [`ViltProcessor`] automatically creates this for you. - The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes additional embedding layers for the language modality. - The PyTorch version of this model is only available in torch 1.10 and higher.
127_2_2
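As referenced in the processor tip above, here is a minimal sketch of preparing data with [`ViltProcessor`] and running visual question answering; the `dandelin/vilt-b32-finetuned-vqa` checkpoint and the COCO example image are assumptions, not part of the text above.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"

# The processor builds input_ids (text) plus pixel_values and pixel_mask (image) in one call.
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
idx = outputs.logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```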
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
This is the configuration class to store the configuration of a [`ViltModel`]. It is used to instantiate a ViLT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViLT [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
127_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the text part of the model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`ViltModel`]. type_vocab_size (`int`, *optional*, defaults to 2):
127_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
represented by the `input_ids` passed when calling [`ViltModel`]. type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`ViltModel`]. This is used when encoding text. modality_type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the modalities passed when calling [`ViltModel`]. This is used after concatenating the embeddings of the text and image modalities. max_position_embeddings (`int`, *optional*, defaults to 40):
127_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
embeddings of the text and image modalities. max_position_embeddings (`int`, *optional*, defaults to 40): The maximum sequence length that this model might ever be used with. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
127_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
127_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
127_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 384): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 32): The size (resolution) of each patch.
127_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 32): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. max_image_length (`int`, *optional*, defaults to -1): The maximum number of patches to take as input for the Transformer encoder. If set to a positive integer,
127_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
The maximum number of patches to take as input for the Transformer encoder. If set to a positive integer, the encoder will sample `max_image_length` patches at maximum. If set to -1, it will not be taken into account. num_images (`int`, *optional*, defaults to -1): The number of images to use for natural language visual reasoning. If set to a positive integer, will be used by [`ViltForImagesAndTextClassification`] for defining the classifier head. Example: ```python
127_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
used by [`ViltForImagesAndTextClassification`] for defining the classifier head. Example: ```python >>> from transformers import ViltModel, ViltConfig
127_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltconfig
.md
>>> # Initializing a ViLT dandelin/vilt-b32-mlm style configuration
>>> configuration = ViltConfig()

>>> # Initializing a model from the dandelin/vilt-b32-mlm style configuration
>>> model = ViltModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
127_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltfeatureextractor
.md
No docstring available for ViltFeatureExtractor Methods: __call__
127_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
Constructs a ViLT image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 384}`): Resize the shorter side of the input to `size["shortest_edge"]`. The longer side will be limited to under
127_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
Resize the shorter side of the input to `size["shortest_edge"]`. The longer side will be limited to under `int((1333 / 800) * size["shortest_edge"])` while preserving the aspect ratio. Only has an effect if `do_resize` is set to `True`. Can be overridden by the `size` parameter in the `preprocess` method. size_divisor (`int`, *optional*, defaults to 32): The size by which to make sure both the height and width can be divided. Only has an effect if `do_resize`
127_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
The size by which to make sure both the height and width can be divided. Only has an effect if `do_resize` is set to `True`. Can be overridden by the `size_divisor` parameter in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): Resampling filter to use if resizing the image. Only has an effect if `do_resize` is set to `True`. Can be overridden by the `resample` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
127_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
overridden by the `resample` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Only has an effect if `do_rescale` is set to `True`. Can be overridden by the `rescale_factor` parameter in the `preprocess` method.
127_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of
127_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
127_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_pad (`bool`, *optional*, defaults to `True`): Whether to pad the image to the `(max_height, max_width)` of the images in the batch. Can be overridden by the `do_pad` parameter in the `preprocess` method.
127_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltimageprocessor
.md
the `do_pad` parameter in the `preprocess` method. Methods: preprocess
127_5_7
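A short sketch of the resizing and padding behaviour described above, assuming the `dandelin/vilt-b32-mlm` checkpoint and a public COCO example image; the second image is just a resized copy to force two different input sizes.

```python
import requests
from PIL import Image
from transformers import ViltImageProcessor

image_processor = ViltImageProcessor.from_pretrained("dandelin/vilt-b32-mlm")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
images = [image, image.resize((320, 480))]  # two images of different sizes

# Shorter edges are resized, both images are padded to a common (max_height, max_width),
# and pixel_mask marks real pixels (1) vs. padding (0).
batch = image_processor(images, return_tensors="pt")
print(batch["pixel_values"].shape)  # (2, 3, height, width)
print(batch["pixel_mask"].shape)    # (2, height, width)
```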
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltprocessor
.md
Constructs a ViLT processor which wraps a BERT tokenizer and ViLT image processor into a single processor. [`ViltProcessor`] offers all the functionalities of [`ViltImageProcessor`] and [`BertTokenizerFast`]. See the docstring of [`~ViltProcessor.__call__`] and [`~ViltProcessor.decode`] for more information. Args: image_processor (`ViltImageProcessor`, *optional*): An instance of [`ViltImageProcessor`]. The image processor is a required input. tokenizer (`BertTokenizerFast`, *optional*):
127_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltprocessor
.md
An instance of [`ViltImageProcessor`]. The image processor is a required input. tokenizer (`BertTokenizerFast`, *optional*): An instance of [`BertTokenizerFast`]. The tokenizer is a required input. Methods: __call__
127_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltmodel
.md
The bare ViLT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model.
127_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltmodel
.md
behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
127_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltformaskedlm
.md
ViLT Model with a language modeling head on top as done during pretraining. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model.
127_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltformaskedlm
.md
behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
127_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforquestionanswering
.md
Vilt Model transformer with a classifier head on top (a linear layer on top of the final hidden state of the [CLS] token) for visual question answering, e.g. for VQAv2. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model.
127_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforquestionanswering
.md
behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
127_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
Vilt Model transformer with a classifier head on top for natural language visual reasoning, e.g. NLVR2. Args: input_ids (`torch.LongTensor` of shape `({0})`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*):
127_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
IDs?](../glossary#input-ids) attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
127_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a *sentence A* token, - 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids) pixel_values (`torch.FloatTensor` of shape `(batch_size, num_images, num_channels, height, width)`): Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See [`ViltImageProcessor.__call__`] for details.
127_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See [`ViltImageProcessor.__call__`] for details. pixel_mask (`torch.LongTensor` of shape `(batch_size, num_images, height, width)`, *optional*): Mask to avoid performing attention on padding pixel values. Mask values selected in `[0, 1]`: - 1 for pixels that are real (i.e. **not masked**), - 0 for pixels that are padding (i.e. **masked**). [What are attention masks?](../glossary#attention-mask)
127_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
- 0 for pixels that are padding (i.e. **masked**). [What are attention masks?](../glossary#attention-mask) head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*):
127_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
- 0 indicates the head is **masked**. inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. image_embeds (`torch.FloatTensor` of shape `(batch_size, num_images, num_patches, hidden_size)`, *optional*):
127_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
image_embeds (`torch.FloatTensor` of shape `(batch_size, num_images, num_patches, hidden_size)`, *optional*): Optionally, instead of passing `pixel_values`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `pixel_values` into patch embeddings. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
127_10_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimagesandtextclassification
.md
tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. Methods: forward
127_10_7
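The argument shapes above (note the extra `num_images` dimension in `pixel_values`) translate into the following NLVR2-style inference sketch; the `dandelin/vilt-b32-finetuned-nlvr2` checkpoint and the two example image URLs are assumptions taken from the public NLVR resources, not from the text above.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForImagesAndTextClassification

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")

image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw)
text = "The left image contains twice the number of dogs as the right image."

encoding = processor([image1, image2], text, return_tensors="pt")
# pixel_values must be (batch_size, num_images, num_channels, height, width)
outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0))
idx = outputs.logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```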
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimageandtextretrieval
.md
Vilt Model transformer with a classifier head on top (a linear layer on top of the final hidden state of the [CLS] token) for image-to-text or text-to-image retrieval, e.g. MSCOCO and F30K. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
127_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltforimageandtextretrieval
.md
behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
127_11_1
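A minimal retrieval sketch for the head described above: each image-text pair is scored separately and the highest logit marks the best match. The `dandelin/vilt-b32-finetuned-coco` checkpoint, the COCO image URL, and the candidate captions are assumptions for illustration.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForImageAndTextRetrieval

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

# Score each candidate caption against the image; the highest logit is the best match.
scores = {}
for text in texts:
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
print(scores)
```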
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltfortokenclassification
.md
ViLT Model with a token classification head on top (a linear layer on top of the final hidden-states of the text tokens) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model.
127_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vilt.md
https://huggingface.co/docs/transformers/en/model_doc/vilt/#viltfortokenclassification
.md
behavior. Parameters: config ([`ViltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
127_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
128_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
128_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#overview
.md
The MPT model was proposed by the [MosaicML](https://www.mosaicml.com/) team and released with multiple sizes and finetuned variants. The MPT models are a series of open source and commercially usable LLMs pre-trained on 1T tokens. MPT models are GPT-style decoder-only transformers with several improvements: performance-optimized layer implementations, architecture changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with ALiBi.
128_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#overview
.md
- MPT base: MPT base models pre-trained on next-token prediction
- MPT instruct: MPT base models fine-tuned on instruction-based tasks
- MPT storywriter: MPT base models fine-tuned for 2500 steps on 65k-token excerpts of fiction books contained in the books3 corpus; this enables the model to handle very long sequences

The original code is available at the [`llm-foundry`](https://github.com/mosaicml/llm-foundry/tree/main) repository.
128_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#overview
.md
The original code is available at the [`llm-foundry`](https://github.com/mosaicml/llm-foundry/tree/main) repository. Read more about it [in the release blogpost](https://www.mosaicml.com/blog/mpt-7b).
128_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#usage-tips
.md
- Learn more about some techniques behind the training of the model [in this section of the llm-foundry repository](https://github.com/mosaicml/llm-foundry/blob/main/TUTORIAL.md#faqs)
- If you want to use the advanced version of the model (triton kernels, direct flash attention integration), you can still use the original model implementation by adding `trust_remote_code=True` when calling `from_pretrained`.
128_2_0
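A minimal generation sketch for the tips above, assuming the `mosaicml/mpt-7b` checkpoint, a recent `transformers`, and `accelerate` installed for `device_map="auto"`; the prompt string is just an example. Add `trust_remote_code=True` to `from_pretrained` only if you want the original implementation with triton kernels and direct flash attention integration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "mosaicml/mpt-7b"  # instruct/storywriter variants load the same way
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("MPT is a decoder-only transformer that", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```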
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#resources
.md
- [Fine-tuning Notebook](https://colab.research.google.com/drive/1HCpQkLL7UXW8xJUJJ29X7QAeNJKO0frZ?usp=sharing) on how to fine-tune MPT-7B on a free Google Colab instance to turn the model into a Chatbot.
128_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
This is the configuration class to store the configuration of a [`MptModel`]. It is used to instantiate an MPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to the MPT-7B architecture [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
128_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: d_model (`int`, *optional*, defaults to 2048): Dimensionality of the embeddings and hidden states. n_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. n_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder.
128_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
n_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. expansion_ratio (`int`, *optional*, defaults to 4): The ratio of the up/down scale in the MLP. max_seq_len (`int`, *optional*, defaults to 2048): The maximum sequence length of the model. vocab_size (`int`, *optional*, defaults to 50368): Vocabulary size of the Mpt model. Defines the maximum number of different tokens that can be represented by the `input_ids` passed when calling [`MptModel`]. Check [this
128_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
the `input_ids` passed when calling [`MptModel`]. Check [this discussion](https://huggingface.co/bigscience/mpt/discussions/120#633d28389addb8530b406c2a) on how the `vocab_size` has been defined. resid_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability applied to the attention output before combining with residual. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. emb_pdrop (`float`, *optional*, defaults to 0.0):
128_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
The epsilon to use in the layer normalization layers. emb_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability for the embedding layer. learned_pos_emb (`bool`, *optional*, defaults to `True`): Whether to use learned positional embeddings. attn_config (`dict`, *optional*): A dictionary used to configure the model's attention module. init_device (`str`, *optional*, defaults to `"cpu"`): The device to use for parameter initialization. Defined for backward compatibility
128_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
The device to use for parameter initialization. Defined for backward compatibility. logit_scale (`float`, *optional*): If not None, scale the logits by this value. no_bias (`bool`, *optional*, defaults to `True`): Whether to use bias in all linear layers. verbose (`int`, *optional*, defaults to 0): The verbosity level to use for logging. Used in the previous versions of MPT models for logging. This argument is deprecated. embedding_fraction (`float`, *optional*, defaults to 1.0):
128_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
argument is deprecated. embedding_fraction (`float`, *optional*, defaults to 1.0): The fraction to scale the gradients of the embedding layer by. norm_type (`str`, *optional*, defaults to `"low_precision_layernorm"`): Type of layer norm to use. All MPT models use the same layer norm implementation. Defined for backward compatibility. use_cache (`bool`, *optional*, defaults to `False`): Whether or not the model should return the last key/values attentions (not used by all models).
128_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
Whether or not the model should return the last key/values attentions (not used by all models). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers import MptConfig, MptModel
128_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptconfig
.md
>>> # Initializing a Mpt configuration
>>> configuration = MptConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = MptModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

Methods: all
128_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptmodel
.md
The bare Mpt Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
128_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptmodel
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MptConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
128_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
128_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforcausallm
.md
The MPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
128_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforcausallm
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MptConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
128_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md
https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
128_6_2