Dataset columns: source (string, 470 distinct values), url (string, 49 to 167 characters), file_type (string, 1 value), chunk (string, 1 to 512 characters), chunk_id (string, 5 to 9 characters)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
.md
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest height and width in the batch. Methods: preprocess - pad - post_process_object_detection
273_5_8
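To make the padding behavior described in the chunk above concrete, here is a minimal sketch: without `pad_size`, a batch is padded to the largest height and width it contains. The dummy image sizes and the `do_resize=False` setting are illustrative assumptions, used only so the padding step is visible in isolation.

```python
import numpy as np
from transformers import YolosImageProcessor

# Disable resizing so only the documented padding behavior is visible.
processor = YolosImageProcessor(do_resize=False)

# Two dummy RGB images with different shapes (H, W, C).
images = [
    np.zeros((480, 640, 3), dtype=np.uint8),
    np.zeros((640, 480, 3), dtype=np.uint8),
]

# With no `pad_size`, both images are padded to the largest height/width in the batch.
inputs = processor(images=images, return_tensors="pt")
print(inputs["pixel_values"].shape)  # expected: torch.Size([2, 3, 640, 640])
```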
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosfeatureextractor
.md
No docstring available for YolosFeatureExtractor Methods: __call__ - pad - post_process_object_detection
273_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosmodel
.md
The bare YOLOS Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`YolosConfig`]): Model configuration class with all the parameters of the model.
273_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosmodel
.md
behavior. Parameters: config ([`YolosConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
273_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosforobjectdetection
.md
YOLOS Model (consisting of a ViT encoder) with object detection heads on top, for tasks such as COCO detection. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`YolosConfig`]): Model configuration class with all the parameters of the model.
273_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosforobjectdetection
.md
behavior. Parameters: config ([`YolosConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
273_8_1
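As a usage sketch for the class documented above, the snippet below runs object detection with a public YOLOS checkpoint and post-processes the raw outputs into boxes. The checkpoint name `hustvl/yolos-small`, the input file name, and the 0.9 confidence threshold are assumptions for illustration, not part of this page.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, YolosForObjectDetection

image = Image.open("example.jpg")  # placeholder: any RGB image

processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-small")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) triples in original image coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```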
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/
.md
<!--Copyright 2023 Mistral AI and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
274_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
274_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#overview
.md
Mistral was introduced in [this blog post](https://mistral.ai/news/announcing-mistral-7b/) by Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. The introduction of the blog post says:
274_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#overview
.md
The introduction of the blog post says: *Mistral AI team is proud to release Mistral 7B, the most powerful language model for its size to date.* Mistral-7B is the first large language model (LLM) released by [mistral.ai](https://mistral.ai/).
274_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#architectural-details
.md
Mistral-7B is a decoder-only Transformer with the following architectural choices: - Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens - GQA (Grouped Query Attention) - allowing faster inference and lower cache size. - Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens. For more details refer to the [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
274_2_0
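The architectural choices listed above are all exposed on [`MistralConfig`]; the short sketch below just prints the relevant default fields. The field names are real configuration attributes, and the commented values reflect the Mistral-7B defaults documented later on this page.

```python
from transformers import MistralConfig

config = MistralConfig()  # Mistral-7B-style defaults

print(config.num_attention_heads)      # 32 query heads
print(config.num_key_value_heads)      # 8 key/value heads -> grouped-query attention (GQA)
print(config.sliding_window)           # 4096-token sliding attention window
print(config.max_position_embeddings)  # 4096 * 32 = 131072 theoretical context
```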
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#license
.md
`Mistral-7B` is released under the Apache 2.0 license.
274_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#usage-tips
.md
The Mistral team has released 3 checkpoints: - a base model, [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), which has been pre-trained to predict the next token on internet-scale data. - an instruction tuned model, [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), which is the base model optimized for chat purposes using supervised fine-tuning (SFT) and direct preference optimization (DPO).
274_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#usage-tips
.md
- an improved instruction tuned model, [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), which improves upon v1. The base model can be used as follows: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer
274_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#usage-tips
.md
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") >>> prompt = "My favourite condiment is" >>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
274_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#usage-tips
.md
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda") >>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] "My favourite condiment is to ..." ``` The instruction-tuned model can be used as follows: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer
274_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#usage-tips
.md
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
274_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#usage-tips
.md
>>> messages = [ ... {"role": "user", "content": "What is your favourite condiment?"}, ... {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, ... {"role": "user", "content": "Do you have mayonnaise recipes?"} ... ] >>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
274_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#usage-tips
.md
>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") >>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] "Mayonnaise can be made as follows: (...)" ``` As can be seen, the instruction-tuned model requires a [chat template](../chat_templating) to be applied to make sure the inputs are prepared in the right format.
274_4_6
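Because the instruction-tuned checkpoints rely on a chat template, it can help to inspect the rendered prompt before tokenizing; the sketch below uses `tokenize=False` for that purpose. The exact rendered string depends on the tokenizer version, so the comment is only indicative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# Render the chat template to a plain string instead of token ids.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # roughly: "<s>[INST] Do you have mayonnaise recipes? [/INST]"
```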
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#speeding-up-mistral-by-using-flash-attention
.md
The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging [Flash Attention](../perf_train_gpu_one#flash-attention-2), which is a faster implementation of the attention mechanism used inside the model. First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. ```bash pip install -U flash-attn --no-build-isolation ```
274_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#speeding-up-mistral-by-using-flash-attention
.md
```bash pip install -U flash-attn --no-build-isolation ``` Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). Also make sure to load your model in half-precision (e.g. `torch.float16`). To load and run a model using Flash Attention-2, refer to the snippet below: ```python >>> import torch
274_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#speeding-up-mistral-by-using-flash-attention
.md
To load and run a model using Flash Attention-2, refer to the snippet below: ```python >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer
274_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#speeding-up-mistral-by-using-flash-attention
.md
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") >>> prompt = "My favourite condiment is" >>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
274_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#speeding-up-mistral-by-using-flash-attention
.md
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda") >>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] "My favourite condiment is to (...)" ```
274_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#expected-speedups
.md
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `mistralai/Mistral-7B-v0.1` checkpoint and the Flash Attention 2 version of the model. <div style="text-align: center"> <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/mistral-7b-inference-large-seqlen.png"> </div>
274_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#sliding-window-attention
.md
The current implementation supports the sliding window attention mechanism and memory-efficient cache management. To enable sliding window attention, just make sure to have a `flash-attn` version that is compatible with sliding window attention (`>=2.3.0`).
274_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#sliding-window-attention
.md
The Flash Attention-2 model also uses a more memory-efficient cache slicing mechanism. As recommended by the official implementation of the Mistral model, which uses a rolling cache mechanism, we keep the cache size fixed (`self.config.sliding_window`), support batched generation only for `padding_side="left"`, and use the absolute position of the current token to compute the positional embedding.
274_7_1
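Since batched generation is only supported with `padding_side="left"`, a minimal batched-generation sketch looks like the following. Setting the pad token to the EOS token is an assumption made here because Mistral ships without a dedicated pad token.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no dedicated pad token

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")

prompts = ["My favourite condiment is", "The best programming language is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```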
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#shrinking-down-mistral-using-quantization
.md
As the Mistral model has 7 billion parameters, it requires about 14GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using [quantization](../quantization.md). If the model is quantized to 4 bits (or half a byte per parameter), that requires only about 3.5GB of RAM.
274_8_0
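The 14GB and 3.5GB figures above follow directly from the parameter count and the bytes per parameter; a quick back-of-the-envelope check (weights only, ignoring activations and the KV cache):

```python
num_params = 7_000_000_000  # ~7B parameters

fp16_gb = num_params * 2 / 1e9    # 2 bytes per parameter in float16
int4_gb = num_params * 0.5 / 1e9  # 0.5 bytes per parameter at 4-bit

print(f"{fp16_gb:.1f} GB in float16")  # ~14.0 GB
print(f"{int4_gb:.1f} GB at 4-bit")    # ~3.5 GB
```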
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#shrinking-down-mistral-using-quantization
.md
Quantizing a model is as simple as passing a `quantization_config` to the model. Below, we'll leverage bitsandbytes quantization (but refer to [this page](../quantization.md) for other quantization methods): ```python >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
274_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#shrinking-down-mistral-using-quantization
.md
>>> # specify how to quantize the model >>> quantization_config = BitsAndBytesConfig( ... load_in_4bit=True, ... bnb_4bit_quant_type="nf4", ... bnb_4bit_compute_dtype=torch.float16, ... ) >>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", quantization_config=quantization_config, device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2") >>> prompt = "My favourite condiment is"
274_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#shrinking-down-mistral-using-quantization
.md
>>> prompt = "My favourite condiment is" >>> messages = [ ... {"role": "user", "content": "What is your favourite condiment?"}, ... {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, ... {"role": "user", "content": "Do you have mayonnaise recipes?"} ... ] >>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
274_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#shrinking-down-mistral-using-quantization
.md
>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") >>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] "The expected output" ``` This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/mistralai/mistral-src).
274_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mistral. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-generation"/>
274_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#resources
.md
<PipelineTag pipeline="text-generation"/> - A demo notebook to perform supervised fine-tuning (SFT) of Mistral-7B can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Mistral/Supervised_fine_tuning_(SFT)_of_an_LLM_using_Hugging_Face_tooling.ipynb). 🌎 - A [blog post](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl) on how to fine-tune LLMs in 2024 using Hugging Face tooling. 🌎
274_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#resources
.md
- The [Alignment Handbook](https://github.com/huggingface/alignment-handbook) by Hugging Face includes scripts and recipes to perform supervised fine-tuning (SFT) and direct preference optimization with Mistral-7B. This includes scripts for full fine-tuning, QLoRA on a single GPU, as well as multi-GPU fine-tuning. - [Causal language modeling task guide](../tasks/language_modeling)
274_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
This is the configuration class to store the configuration of a [`MistralModel`]. It is used to instantiate a Mistral model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Mistral-7B-v0.1 or Mistral-7B-Instruct-v0.1. [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
274_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
[mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000):
274_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the Mistral model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`MistralModel`] hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 14336): Dimension of the MLP representations.
274_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
intermediate_size (`int`, *optional*, defaults to 14336): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*, defaults to 8): This is the number of key_value heads that should be used to implement Grouped Query Attention. If
274_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details check out [this
274_10_4
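The relationship between `num_attention_heads` and `num_key_value_heads` described above can be made explicit by constructing a configuration for each attention variant; the head counts below are illustrative, not recommended settings.

```python
from transformers import MistralConfig

# num_key_value_heads == num_attention_heads -> Multi-Head Attention (MHA)
mha_config = MistralConfig(num_attention_heads=32, num_key_value_heads=32)

# 1 < num_key_value_heads < num_attention_heads -> Grouped-Query Attention (GQA, Mistral's default of 8)
gqa_config = MistralConfig(num_attention_heads=32, num_key_value_heads=8)

# num_key_value_heads == 1 -> Multi-Query Attention (MQA)
mqa_config = MistralConfig(num_attention_heads=32, num_key_value_heads=1)
```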
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
by meanpooling all the original heads within that group. For more details check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `8`. head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`): The attention head dimension. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
274_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
max_position_embeddings (`int`, *optional*, defaults to `4096*32`): The maximum sequence length that this model might ever be used with. Mistral's sliding window attention allows sequences of up to 4096*32 tokens. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers.
274_10_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*): The id of the padding token. bos_token_id (`int`, *optional*, defaults to 1): The id of the "beginning-of-sequence" token. eos_token_id (`int`, *optional*, defaults to 2):
274_10_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
The id of the "beginning-of-sequence" token. eos_token_id (`int`, *optional*, defaults to 2): The id of the "end-of-sequence" token. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. sliding_window (`int`, *optional*, defaults to 4096): Sliding window attention window size. If not specified, will default to `4096`.
274_10_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
Sliding window attention window size. If not specified, will default to `4096`. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. ```python >>> from transformers import MistralModel, MistralConfig
274_10_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralconfig
.md
>>> # Initializing a Mistral 7B style configuration >>> configuration = MistralConfig() >>> # Initializing a model from the Mistral 7B style configuration >>> model = MistralModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
274_10_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralmodel
.md
The bare Mistral Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
274_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MistralConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
274_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`MistralDecoderLayer`] Args: config: MistralConfig Methods: forward
274_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforcausallm
.md
No docstring available for MistralForCausalLM Methods: forward
274_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforsequenceclassification
.md
The Mistral Model transformer with a sequence classification head on top (linear layer). [`MistralForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
274_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
274_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
274_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforsequenceclassification
.md
and behavior. Parameters: config ([`MistralConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
274_13_3
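A minimal sketch of the last-token classification behavior described above; it assumes you start from the base checkpoint, so the classification head is randomly initialized and the two labels are only placeholders.

```python
import torch
from transformers import AutoTokenizer, MistralForSequenceClassification

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MistralForSequenceClassification.from_pretrained(model_id, num_labels=2)

# The head classifies from the last non-padding token, so the model needs a pad token id.
model.config.pad_token_id = tokenizer.eos_token_id

inputs = tokenizer("My favourite condiment is ketchup.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
print(logits.argmax(dim=-1))
```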
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralfortokenclassification
.md
The Mistral Model transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
274_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralfortokenclassification
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MistralConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
274_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralfortokenclassification
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
274_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforquestionanswering
.md
The Mistral Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
274_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MistralConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
274_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#mistralforquestionanswering
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
274_15_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#flaxmistralmodel
.md
No docstring available for FlaxMistralModel Methods: __call__
274_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#flaxmistralforcausallm
.md
No docstring available for FlaxMistralForCausalLM Methods: __call__
274_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#tfmistralmodel
.md
No docstring available for TFMistralModel Methods: call
274_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#tfmistralforcausallm
.md
No docstring available for TFMistralForCausalLM Methods: call
274_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mistral.md
https://huggingface.co/docs/transformers/en/model_doc/mistral/#tfmistralforsequenceclassification
.md
No docstring available for TFMistralForSequenceClassification Methods: call
274_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
275_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
275_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#overview
.md
The Swin2SR model was proposed in [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. Swin2SR improves the [SwinIR](https://github.com/JingyunLiang/SwinIR/) model by incorporating [Swin Transformer v2](swinv2) layers which mitigates issues such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data.
275_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#overview
.md
and fine-tuning, and hunger on data. The abstract from the paper is the following:
275_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#overview
.md
*Compression plays an important role on the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional
275_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#overview
.md
images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformers-based methods such as SwinIR, show impressive performance on these tasks.
275_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#overview
.md
In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image
275_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#overview
.md
tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video".*
275_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/swin2sr_architecture.png" alt="drawing" width="600"/> <small> Swin2SR architecture. Taken from the <a href="https://arxiv.org/abs/2209.11345">original paper.</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/mv-lab/swin2sr).
275_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#resources
.md
Demo notebooks for Swin2SR can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Swin2SR). A demo Space for image super-resolution with Swin2SR can be found [here](https://huggingface.co/spaces/jjourney1125/swin2sr).
275_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srimageprocessor
.md
Constructs a Swin2SR image processor. Args: do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. Methods: preprocess
275_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
This is the configuration class to store the configuration of a [`Swin2SRModel`]. It is used to instantiate a Swin Transformer v2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Swin Transformer v2 [caidas/swin2sr-classicalsr-x2-64](https://huggingface.co/caidas/swin2sr-classicalsr-x2-64) architecture.
275_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
[caidas/swin2sr-classicalsr-x2-64](https://huggingface.co/caidas/swin2sr-classicalsr-x2-64) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 64): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 1): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3):
275_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. num_channels_out (`int`, *optional*, defaults to `num_channels`): The number of output channels. If not set, it will be set to `num_channels`. embed_dim (`int`, *optional*, defaults to 180): Dimensionality of patch embedding. depths (`list(int)`, *optional*, defaults to `[6, 6, 6, 6, 6, 6]`): Depth of each layer in the Transformer encoder.
275_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
depths (`list(int)`, *optional*, defaults to `[6, 6, 6, 6, 6, 6]`): Depth of each layer in the Transformer encoder. num_heads (`list(int)`, *optional*, defaults to `[6, 6, 6, 6, 6, 6]`): Number of attention heads in each layer of the Transformer encoder. window_size (`int`, *optional*, defaults to 8): Size of windows. mlp_ratio (`float`, *optional*, defaults to 2.0): Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (`bool`, *optional*, defaults to `True`):
275_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
275_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. drop_path_rate (`float`, *optional*, defaults to 0.1): Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. use_absolute_embeddings (`bool`, *optional*, defaults to `False`):
275_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
`"selu"` and `"gelu_new"` are supported. use_absolute_embeddings (`bool`, *optional*, defaults to `False`): Whether or not to add absolute position embeddings to the patch embeddings. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. upscale (`int`, *optional*, defaults to 2):
275_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
The epsilon used by the layer normalization layers. upscale (`int`, *optional*, defaults to 2): The upscale factor for the image. 2/3/4/8 for image super resolution, 1 for denoising and compression artifact reduction img_range (`float`, *optional*, defaults to 1.0): The range of the values of the input image. resi_connection (`str`, *optional*, defaults to `"1conv"`): The convolutional block to use before the residual connection in each stage. upsampler (`str`, *optional*, defaults to `"pixelshuffle"`):
275_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
upsampler (`str`, *optional*, defaults to `"pixelshuffle"`): The reconstruction module. Can be 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None. Example: ```python >>> from transformers import Swin2SRConfig, Swin2SRModel
275_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srconfig
.md
>>> # Initializing a Swin2SR caidas/swin2sr-classicalsr-x2-64 style configuration >>> configuration = Swin2SRConfig() >>> # Initializing a model (with random weights) from the caidas/swin2sr-classicalsr-x2-64 style configuration >>> model = Swin2SRModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
275_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srmodel
.md
The bare Swin2SR Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Swin2SRConfig`]): Model configuration class with all the parameters of the model.
275_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srmodel
.md
behavior. Parameters: config ([`Swin2SRConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
275_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srforimagesuperresolution
.md
Swin2SR Model transformer with an upsampler head on top for image super resolution and restoration. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Swin2SRConfig`]): Model configuration class with all the parameters of the model.
275_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin2sr.md
https://huggingface.co/docs/transformers/en/model_doc/swin2sr/#swin2srforimagesuperresolution
.md
behavior. Parameters: config ([`Swin2SRConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
275_6_1
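A usage sketch for the super-resolution head documented above. The checkpoint id `caidas/swin2SR-classical-sr-x2-64` is the Hub spelling of the classical x2 checkpoint referenced in the config section, and the input/output file names are placeholders.

```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x2-64")

image = Image.open("low_res.png")  # placeholder low-resolution RGB image
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# `reconstruction` holds the upscaled image as a float tensor with values in [0, 1].
output = outputs.reconstruction.squeeze().clamp(0, 1).numpy()
output = np.moveaxis(output, source=0, destination=-1)  # channels-first -> channels-last
upscaled = Image.fromarray((output * 255.0).round().astype(np.uint8))
upscaled.save("upscaled.png")
```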
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
276_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
276_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#overview
.md
The FLAVA model was proposed in [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela, and was accepted at CVPR 2022. The paper aims at creating a single unified foundation model which can work across vision, language, as well as vision-and-language multimodal tasks. The abstract from the paper is the following:
276_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#overview
.md
as well as vision-and-language multimodal tasks. The abstract from the paper is the following: *State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising
276_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#overview
.md
(with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a "foundation", that targets all modalities at once -- a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities.*
276_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#overview
.md
impressive performance on a wide range of 35 tasks spanning these target modalities.* This model was contributed by [aps](https://huggingface.co/aps). The original code can be found [here](https://github.com/facebookresearch/multimodal/tree/main/examples/flava).
276_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
[`FlavaConfig`] is the configuration class to store the configuration of a [`FlavaModel`]. It is used to instantiate a FLAVA model according to the specified arguments, defining the text model, image model, image codebook and multimodal model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture.
276_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
that of the FLAVA [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: text_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`FlavaTextConfig`]. image_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`FlavaImageConfig`].
276_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
image_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`FlavaImageConfig`]. multimodal_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`FlavaMultimodalConfig`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. projection_dim (`int`, *optional*, defaults to 512):
276_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
The epsilon used by the layer normalization layers. projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and image projection layers. logit_scale_init_value (`float`, *optional*, defaults to 2.6592): The initial value of the *logit_scale* parameter. Default is used as per the original FLAVA/CLIP implementation. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
276_2_3
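As a small illustration of how the sub-configurations above compose, a FLAVA configuration can be built from its defaults or from explicit sub-configuration dictionaries and then used to initialize a randomly weighted model. This is only a sketch of the composition pattern described in the args above.

```python
from transformers import FlavaConfig, FlavaImageConfig, FlavaModel, FlavaTextConfig

# Default configuration, similar to facebook/flava-full.
config = FlavaConfig()

# Or compose it explicitly from sub-configuration dictionaries (defaults shown here).
config = FlavaConfig(
    text_config=FlavaTextConfig().to_dict(),
    image_config=FlavaImageConfig().to_dict(),
)

# Initialize a model with random weights from this configuration.
model = FlavaModel(config)
print(model.config.projection_dim)  # 512 by default
```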