## SwinForImageClassification

<frameworkcontent>
<pt>
Swin Model transformer with an image classification head on top (a linear layer on top of the pooled final hidden states), e.g. for ImageNet.

<Tip>

Note that it's possible to fine-tune Swin on higher-resolution images than the ones it has been trained on by setting `interpolate_pos_encoding` to `True` in the forward pass of the model. This will interpolate the pre-trained position embeddings to the higher resolution.

</Tip>

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`SwinConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
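As a minimal sketch of the tip above (the checkpoint name and its 1000-class head are assumptions for illustration, not fixed by this doc), a forward pass on images larger than the training resolution could look like:

```python
>>> import torch
>>> from transformers import SwinForImageClassification

>>> model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

>>> # 384x384 inputs, larger than the 224x224 resolution the checkpoint was trained on
>>> pixel_values = torch.randn(1, 3, 384, 384)
>>> with torch.no_grad():
...     outputs = model(pixel_values, interpolate_pos_encoding=True)
>>> outputs.logits.shape  # 1000 ImageNet classes for this assumed checkpoint
torch.Size([1, 1000])
```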
</pt>
<tf>
## TFSwinModel
No docstring available for TFSwinModel.

Methods: call
## TFSwinForMaskedImageModeling
No docstring available for TFSwinForMaskedImageModeling.

Methods: call
## TFSwinForImageClassification
No docstring available for TFSwinForImageClassification.

Methods: call

</tf>
</frameworkcontent>
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Perceiver
## Overview
The Perceiver IO model was proposed in [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, and João Carreira.

Perceiver IO is a generalization of [Perceiver](https://arxiv.org/abs/2103.03206) to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size, and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than standard Transformers can handle. This means, for example, that Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs.
The abstract from the paper is the following:

*The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves strong results on tasks with highly structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation.*

Here's a TLDR explaining how Perceiver works:
The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale quadratically with the sequence length. Hence, models like BERT and RoBERTa are limited to a max sequence length of 512 tokens. Perceiver aims to solve this issue by, instead of performing self-attention on the inputs, performing it on a set of latent variables, and only using the inputs for cross-attention. In this way, the time and memory requirements don't depend on the length of the inputs anymore, as one uses a fixed number of latent variables, like 256 or 512. These are randomly initialized, after which they are trained end-to-end using backpropagation.

Internally, [`PerceiverModel`] will create the latents, which is a tensor of shape `(batch_size, num_latents, d_latents)`. One must provide `inputs` (which could be text, images, audio, you name it!) to the model, which it will use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape. One can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along the sequence dimension and placing a linear layer on top of that to project the `d_latents` to `num_labels`.

This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up work, Perceiver IO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the last hidden states of the latents, using the outputs as queries and the latents as keys and values.

So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes, providing `inputs` of length 2048 to the model. If one now masks out certain of these 2048 tokens, one can define the `outputs` as being of shape `(batch_size, 2048, 768)`. Next, one performs cross-attention with the final hidden states of the latents to update the `outputs` tensor. After cross-attention, one still has a tensor of shape `(batch_size, 2048, 768)`. One can then place a regular language modeling head on top to project the last dimension to the vocabulary size of the model, i.e. creating logits of shape `(batch_size, 2048, 262)` (as Perceiver uses a vocabulary size of 262 byte IDs).
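To make the shape bookkeeping above concrete, here is an illustrative PyTorch sketch of the decoder cross-attention (a toy stand-in, not the library's actual implementation): the outputs act as queries, the latents as keys and values.

```python
import torch
import torch.nn as nn

batch_size, num_latents, d_latents = 1, 256, 1280
seq_len, d_out, vocab_size = 2048, 768, 262

latents = torch.randn(batch_size, num_latents, d_latents)  # final hidden states of the latents
queries = torch.randn(batch_size, seq_len, d_out)          # decoder ("output") queries

# cross-attention: the 2048 output positions attend to the 256 latents
attn = nn.MultiheadAttention(
    embed_dim=d_out, num_heads=8, kdim=d_latents, vdim=d_latents, batch_first=True
)
decoded, _ = attn(queries, latents, latents)               # (batch_size, 2048, 768)

# regular language modeling head projecting to the 262 byte IDs
lm_head = nn.Linear(d_out, vocab_size)
logits = lm_head(decoded)                                  # (batch_size, 2048, 262)
```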
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>

<small> Perceiver IO architecture. Taken from the <a href="https://arxiv.org/abs/2107.14795">original paper</a>. </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/deepmind/deepmind-research/tree/master/perceiver).

<Tip warning={true}>

Perceiver does **not** work with `torch.nn.DataParallel` due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035).

</Tip>
## Resources
- The quickest way to get started with the Perceiver is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Perceiver).
- Refer to the [blog post](https://huggingface.co/blog/perceiver) if you want to fully understand how the model works and is implemented in the library. Note that the models available in the library only showcase some examples of what you can do with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection, audio classification, video classification, etc.
- [Text classification task guide](../tasks/sequence_classification)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Image classification task guide](../tasks/image_classification)
## Perceiver specific outputs
models.perceiver.modeling_perceiver.PerceiverModelOutput

Base class for Perceiver base model's outputs, with potential hidden states, attentions and cross-attentions.

Args:
    logits (`torch.FloatTensor` of shape `(batch_size, num_labels)`):
        Classification (or regression if config.num_labels==1) scores (before SoftMax).
    last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
        Sequence of hidden-states at the output of the last layer of the model.
    hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
        Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
    attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
    cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

models.perceiver.modeling_perceiver.PerceiverDecoderOutput

Base class for Perceiver decoder outputs, with potential cross-attentions.

Args:
    logits (`torch.FloatTensor` of shape `(batch_size, num_labels)`):
        Output of the basic decoder.
    cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput

Base class for Perceiver's masked language model outputs.

Args:
    loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
        Masked language modeling (MLM) loss.
    logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
        Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
    hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
        Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
    attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, num_latents, num_latents)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
    cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

models.perceiver.modeling_perceiver.PerceiverClassifierOutput

Base class for Perceiver's outputs of sequence/image classification models, optical flow and multimodal autoencoding.

Args:
    loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
        Classification (or regression if config.num_labels==1) loss.
    logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`):
        Classification (or regression if config.num_labels==1) scores (before SoftMax).
    hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
        Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
    attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
    cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
## PerceiverConfig
This is the configuration class to store the configuration of a [`PerceiverModel`]. It is used to instantiate a Perceiver model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Perceiver [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    num_latents (`int`, *optional*, defaults to 256):
        The number of latents.
    d_latents (`int`, *optional*, defaults to 1280):
        Dimension of the latent embeddings.
    d_model (`int`, *optional*, defaults to 768):
        Dimension of the inputs. Should only be provided in case [`PerceiverTextPreprocessor`] is used or no preprocessor is provided.
    num_blocks (`int`, *optional*, defaults to 1):
        Number of blocks in the Transformer encoder.
    num_self_attends_per_block (`int`, *optional*, defaults to 26):
        The number of self-attention layers per block.
    num_self_attention_heads (`int`, *optional*, defaults to 8):
        Number of attention heads for each self-attention layer in the Transformer encoder.
    num_cross_attention_heads (`int`, *optional*, defaults to 8):
        Number of attention heads for each cross-attention layer in the Transformer encoder.
    qk_channels (`int`, *optional*):
        Dimension to project the queries + keys before applying attention in the cross-attention and self-attention layers of the encoder. Will default to preserving the dimension of the queries if not specified.
    v_channels (`int`, *optional*):
        Dimension to project the values before applying attention in the cross-attention and self-attention layers of the encoder. Will default to preserving the dimension of the queries if not specified.
    cross_attention_shape_for_attention (`str`, *optional*, defaults to `"kv"`):
        Dimension to use when downsampling the queries and keys in the cross-attention layer of the encoder.
    self_attention_widening_factor (`int`, *optional*, defaults to 1):
        Dimension of the feed-forward layer in the self-attention layers of the Transformer encoder.
    cross_attention_widening_factor (`int`, *optional*, defaults to 1):
        Dimension of the feed-forward layer in the cross-attention layer of the Transformer encoder.
    hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
        The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
    attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
        The dropout ratio for the attention probabilities.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    layer_norm_eps (`float`, *optional*, defaults to 1e-12):
        The epsilon used by the layer normalization layers.
    use_query_residual (`bool`, *optional*, defaults to `True`):
        Whether to add a query residual in the cross-attention layer of the encoder.
    vocab_size (`int`, *optional*, defaults to 262):
        Vocabulary size for the masked language modeling model.
    max_position_embeddings (`int`, *optional*, defaults to 2048):
        The maximum sequence length that the masked language modeling model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
    image_size (`int`, *optional*, defaults to 56):
        Size of the images after preprocessing, for [`PerceiverForImageClassificationLearned`].
    train_size (`List[int]`, *optional*, defaults to `[368, 496]`):
        Training size of the images for the optical flow model.
    num_frames (`int`, *optional*, defaults to 16):
        Number of video frames used for the multimodal autoencoding model.
    audio_samples_per_frame (`int`, *optional*, defaults to 1920):
        Number of audio samples per frame for the multimodal autoencoding model.
    samples_per_patch (`int`, *optional*, defaults to 16):
        Number of audio samples per patch when preprocessing the audio for the multimodal autoencoding model.
    output_shape (`List[int]`, *optional*, defaults to `[1, 16, 224, 224]`):
        Shape of the output (batch_size, num_frames, height, width) for the video decoder queries of the multimodal autoencoding model. This excludes the channel dimension.
    output_num_channels (`int`, *optional*, defaults to 512):
        Number of output channels for each modality decoder.

Example:

```python
>>> from transformers import PerceiverModel, PerceiverConfig

>>> # Initializing a Perceiver deepmind/language-perceiver style configuration
>>> configuration = PerceiverConfig()

>>> # Initializing a model from the deepmind/language-perceiver style configuration
>>> model = PerceiverModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
## PerceiverTokenizer
Construct a Perceiver tokenizer. The Perceiver simply uses raw UTF-8 byte encoding.

This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Args:
    pad_token (`str`, *optional*, defaults to `"[PAD]"`):
        The token used for padding, for example when batching sequences of different lengths.
    bos_token (`str`, *optional*, defaults to `"[BOS]"`):
        The BOS token (reserved in the vocab, but not actually used).
    eos_token (`str`, *optional*, defaults to `"[EOS]"`):
        The end of sequence token (reserved in the vocab, but not actually used).

        <Tip>

        When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`.

        </Tip>

    mask_token (`str`, *optional*, defaults to `"[MASK]"`):
        The MASK token, useful for masked language modeling.
    cls_token (`str`, *optional*, defaults to `"[CLS]"`):
        The CLS token (reserved in the vocab, but not actually used).
    sep_token (`str`, *optional*, defaults to `"[SEP]"`):
        The separator token, which is used when building a sequence from two sequences.

Methods: __call__
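To make the byte-level behavior concrete, a small sketch (using the `deepmind/language-perceiver` checkpoint referenced for [`PerceiverConfig`]; exact token IDs are an implementation detail):

```python
>>> from transformers import PerceiverTokenizer

>>> tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")

>>> # every UTF-8 byte becomes one token; byte IDs are offset past the special tokens
>>> encoding = tokenizer("hello world", return_tensors="pt")
>>> encoding.input_ids.shape  # [CLS] + 11 bytes + [SEP]
torch.Size([1, 13])
```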
## PerceiverFeatureExtractor
No docstring available for PerceiverFeatureExtractor.

Methods: __call__
## PerceiverImageProcessor
Constructs a Perceiver image processor.

Args:
    do_center_crop (`bool`, *optional*, defaults to `True`):
        Whether or not to center crop the image. If the input size is smaller than `crop_size` along any edge, the image will be padded with zeros and then center cropped. Can be overridden by the `do_center_crop` parameter in the `preprocess` method.
    crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 256, "width": 256}`):
        Desired output size when applying center-cropping. Can be overridden by the `crop_size` parameter in the `preprocess` method.
    do_resize (`bool`, *optional*, defaults to `True`):
        Whether to resize the image to `(size["height"], size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method.
    size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`):
        Size of the image after resizing. Can be overridden by the `size` parameter in the `preprocess` method.
    resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`):
        Defines the resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method.
    do_rescale (`bool`, *optional*, defaults to `True`):
        Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method.
    rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
        Defines the scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method.
    do_normalize (`bool`, *optional*, defaults to `True`):
        Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method.
    image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
        Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
    image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
        Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.

Methods: preprocess
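For illustration, a minimal sketch of the default pipeline described above (center crop to 256, then resize to 224; the dummy input is arbitrary):

```python
>>> import numpy as np
>>> from transformers import PerceiverImageProcessor

>>> image_processor = PerceiverImageProcessor()

>>> image = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)  # dummy HxWxC uint8 image
>>> inputs = image_processor(image, return_tensors="pt")
>>> inputs["pixel_values"].shape
torch.Size([1, 3, 224, 224])
```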
## PerceiverTextPreprocessor
models.perceiver.modeling_perceiver.PerceiverTextPreprocessor

Text preprocessing for Perceiver Encoder. Can be used to embed `inputs` and add positional encodings.

The dimensionality of the embeddings is determined by the `d_model` attribute of the configuration.

Args:
    config ([`PerceiverConfig`]): Model configuration.
## PerceiverImagePreprocessor
models.perceiver.modeling_perceiver.PerceiverImagePreprocessor

Image preprocessing for Perceiver Encoder.

Note: the *out_channels* argument refers to the output channels of a convolutional layer, if *prep_type* is set to "conv1x1" or "conv". If one adds absolute position embeddings, one must make sure the *num_channels* of the position encoding kwargs are set equal to the *out_channels*.

Args:
    config ([`PerceiverConfig`]): Model configuration.
    prep_type (`str`, *optional*, defaults to `"conv"`):
        Preprocessing type. Can be "conv1x1", "conv", "patches" or "pixels".
    spatial_downsample (`int`, *optional*, defaults to 4):
        Spatial downsampling factor.
    temporal_downsample (`int`, *optional*, defaults to 1):
        Temporal downsampling factor (only relevant in case a time dimension is present).
    position_encoding_type (`str`, *optional*, defaults to `"fourier"`):
        Position encoding type. Can be "fourier" or "trainable".
    in_channels (`int`, *optional*, defaults to 3):
        Number of channels in the input.
    out_channels (`int`, *optional*, defaults to 64):
        Number of channels in the output.
    conv_after_patching (`bool`, *optional*, defaults to `False`):
        Whether to apply a convolutional layer after patching.
    conv_after_patching_in_channels (`int`, *optional*, defaults to 54):
        Number of channels in the input of the convolutional layer after patching.
    conv2d_use_batchnorm (`bool`, *optional*, defaults to `True`):
        Whether to use batch normalization in the convolutional layer.
    concat_or_add_pos (`str`, *optional*, defaults to `"concat"`):
        How to concatenate the position encoding to the input. Can be "concat" or "add".
    project_pos_dim (`int`, *optional*, defaults to -1):
        Dimension of the position encoding to project to. If -1, no projection is applied.
    **position_encoding_kwargs (`Dict`, *optional*):
        Keyword arguments for the position encoding.
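For illustration, a sketch of constructing this preprocessor with 1x1-convolutional preprocessing and trainable position encodings (the keyword values are illustrative choices for an image classification setup, not prescribed by this doc):

```python
>>> from transformers import PerceiverConfig
>>> from transformers.models.perceiver.modeling_perceiver import PerceiverImagePreprocessor

>>> config = PerceiverConfig(image_size=224)
>>> preprocessor = PerceiverImagePreprocessor(
...     config,
...     prep_type="conv1x1",
...     spatial_downsample=1,
...     out_channels=256,
...     position_encoding_type="trainable",
...     concat_or_add_pos="concat",
...     project_pos_dim=256,
...     trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=config.image_size**2),
... )
```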
## PerceiverOneHotPreprocessor
models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor

One-hot preprocessor for Perceiver Encoder. Can be used to add a dummy index dimension to the input.

Args:
    config ([`PerceiverConfig`]): Model configuration.
## PerceiverAudioPreprocessor
models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor

Audio preprocessing for Perceiver Encoder.

Args:
    config ([`PerceiverConfig`]): Model configuration.
    prep_type (`str`, *optional*, defaults to `"patches"`):
        Preprocessor type to use. Only "patches" is supported.
    samples_per_patch (`int`, *optional*, defaults to 96):
        Number of samples per patch.
    position_encoding_type (`str`, *optional*, defaults to `"fourier"`):
        Type of position encoding to use. Can be "trainable" or "fourier".
    concat_or_add_pos (`str`, *optional*, defaults to `"concat"`):
        How to concatenate the position encoding to the input. Can be "concat" or "add".
    out_channels (`int`, *optional*, defaults to 64):
        Number of channels in the output.
    project_pos_dim (`int`, *optional*, defaults to -1):
        Dimension of the position encoding to project to. If -1, no projection is applied.
    **position_encoding_kwargs (`Dict`, *optional*):
        Keyword arguments for the position encoding.
## PerceiverMultimodalPreprocessor
models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor

Multimodal preprocessing for Perceiver Encoder. Inputs for each modality are preprocessed, then padded with trainable position embeddings to have the same number of channels.

Args:
    modalities (`Mapping[str, PreprocessorType]`):
        Dict mapping modality name to preprocessor.
    mask_probs (`Dict[str, float]`):
        Dict mapping modality name to masking probability of that modality.
    min_padding_size (`int`, *optional*, defaults to 2):
        The minimum padding size for all modalities. The final output will have num_channels equal to the maximum channels across all modalities plus min_padding_size.
## PerceiverProjectionDecoder
models.perceiver.modeling_perceiver.PerceiverProjectionDecoder

Baseline projection decoder (no cross-attention).

Args:
    config ([`PerceiverConfig`]): Model configuration.
## PerceiverBasicDecoder
models.perceiver.modeling_perceiver.PerceiverBasicDecoder

Cross-attention-based decoder. This class can be used to decode the final hidden states of the latents using a cross-attention operation, in which the latents produce keys and values. The shape of the output of this class depends on how one defines the output queries (also called decoder queries).

Args:
    config ([`PerceiverConfig`]): Model configuration.
    output_num_channels (`int`, *optional*):
        The number of channels in the output. Will only be used in case *final_project* is set to `True`.
    position_encoding_type (`str`, *optional*, defaults to `"trainable"`):
        The type of position encoding to use. Can be either "trainable", "fourier", or "none".
    output_index_dims (`int`, *optional*):
        The number of dimensions of the output queries. Ignored if `position_encoding_type` == "none".
    num_channels (`int`, *optional*, defaults to 128):
        The number of channels of the decoder queries. Ignored if `position_encoding_type` == "none".
    qk_channels (`int`, *optional*):
        The number of channels of the queries and keys in the cross-attention layer.
    v_channels (`int`, *optional*):
        The number of channels of the values in the cross-attention layer.
    num_heads (`int`, *optional*, defaults to 1):
        The number of attention heads in the cross-attention layer.
    widening_factor (`int`, *optional*, defaults to 1):
        The widening factor of the cross-attention layer.
    use_query_residual (`bool`, *optional*, defaults to `False`):
        Whether to use a residual connection between the query and the output of the cross-attention layer.
    concat_preprocessed_input (`bool`, *optional*, defaults to `False`):
        Whether to concatenate the preprocessed input to the query.
    final_project (`bool`, *optional*, defaults to `True`):
        Whether to project the output of the cross-attention layer to a target dimension.
    position_encoding_only (`bool`, *optional*, defaults to `False`):
        Whether to only use this class to define output queries.
## PerceiverClassificationDecoder
models.perceiver.modeling_perceiver.PerceiverClassificationDecoder

Cross-attention-based classification decoder. Light-weight wrapper of [`PerceiverBasicDecoder`] for logit output. Will turn the output of the Perceiver encoder, which is of shape `(batch_size, num_latents, d_latents)`, into a tensor of shape `(batch_size, num_labels)`. The queries are of shape `(batch_size, 1, num_labels)`.

Args:
    config ([`PerceiverConfig`]): Model configuration.
## PerceiverOpticalFlowDecoder
models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder

Cross-attention-based optical flow decoder.
## PerceiverBasicVideoAutoencodingDecoder
models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder

Cross-attention-based video-autoencoding decoder. Light-weight wrapper of [`PerceiverBasicDecoder`] with video reshaping logic.

Args:
    config ([`PerceiverConfig`]): Model configuration.
    output_shape (`List[int]`):
        Shape of the output as (batch_size, num_frames, height, width), excluding the channel dimension.
    position_encoding_type (`str`):
        The type of position encoding to use. Can be either "trainable", "fourier", or "none".
## PerceiverMultimodalDecoder
models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder

Multimodal decoding by composing uni-modal decoders. The *modalities* argument of the constructor is a dictionary mapping modality name to the decoder of that modality. That decoder will be used to construct queries for that modality. Modality-specific queries are padded with trainable modality-specific parameters, after which they are concatenated along the time dimension. Next, there is a shared cross-attention operation across all modalities.

Args:
    config ([`PerceiverConfig`]): Model configuration.
    modalities (`Dict[str, PerceiverAbstractDecoder]`):
        Dictionary mapping modality name to the decoder of that modality.
    num_outputs (`int`):
        The number of outputs of the decoder.
    output_num_channels (`int`):
        The number of channels in the output.
    min_padding_size (`int`, *optional*, defaults to 2):
        The minimum padding size for all modalities. The final output will have num_channels equal to the maximum channels across all modalities plus min_padding_size.
    subsampled_index_dims (`Dict[str, int]`, *optional*):
        Dictionary mapping modality name to the subsampled index dimensions to use for the decoder query of that modality.
## PerceiverProjectionPostprocessor
models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor

Projection postprocessing for Perceiver. Can be used to project the channels of the decoder output to a lower dimension.

Args:
    in_channels (`int`): Number of channels in the input.
    out_channels (`int`): Number of channels in the output.
## PerceiverAudioPostprocessor
models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor

Audio postprocessing for Perceiver. Can be used to convert the decoder output to audio features.

Args:
    config ([`PerceiverConfig`]): Model configuration.
    in_channels (`int`): Number of channels in the input.
    postproc_type (`str`, *optional*, defaults to `"patches"`):
        Postprocessor type to use. Currently, only "patches" is supported.
## PerceiverClassificationPostprocessor
models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor

Classification postprocessing for Perceiver. Can be used to convert the decoder output to classification logits.

Args:
    config ([`PerceiverConfig`]): Model configuration.
    in_channels (`int`): Number of channels in the input.
## PerceiverMultimodalPostprocessor
models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor

Multimodal postprocessing for Perceiver. Can be used to combine modality-specific postprocessors into a single postprocessor.

Args:
    modalities (`Mapping[str, PostprocessorType]`):
        Dictionary mapping modality name to postprocessor class for that modality.
    input_is_dict (`bool`, *optional*, defaults to `False`):
        If True, input is assumed to be dictionary structured, and outputs keep the same dictionary shape. If False, input is a tensor which is sliced up during postprocessing by *modality_sizes*.
## PerceiverModel
The Perceiver: a scalable, fully attentional architecture.

<Tip>

Note that it's possible to fine-tune Perceiver on higher-resolution images than the ones it has been trained on by setting `interpolate_pos_encoding` to `True` in the forward pass of the model. This will interpolate the pre-trained position embeddings to the higher resolution.

</Tip>

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
    decoder (*DecoderType*, *optional*):
        Optional decoder to use to decode the latent representation of the encoder. Examples include *transformers.models.perceiver.modeling_perceiver.PerceiverBasicDecoder*, *transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder*, *transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder*.
    input_preprocessor (*PreprocessorType*, *optional*):
        Optional input preprocessor to use. Examples include *transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverTextPreprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor*.
    output_postprocessor (*PostprocessorType*, *optional*):
        Optional output postprocessor to use. Examples include *transformers.models.perceiver.modeling_perceiver.PerceiverImagePostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor*.

Note that you can define your own decoders, preprocessors and/or postprocessors to fit your use case.

Methods: forward
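A minimal sketch of composing [`PerceiverModel`] from such components for text classification (the constructor keyword values are illustrative; the two-label output comes from the config's default `num_labels`):

```python
>>> import torch
>>> from transformers import PerceiverConfig, PerceiverTokenizer, PerceiverModel
>>> from transformers.models.perceiver.modeling_perceiver import (
...     PerceiverTextPreprocessor,
...     PerceiverClassificationDecoder,
... )

>>> config = PerceiverConfig()
>>> preprocessor = PerceiverTextPreprocessor(config)
>>> decoder = PerceiverClassificationDecoder(
...     config,
...     num_channels=config.d_latents,
...     trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
...     use_query_residual=True,
... )
>>> model = PerceiverModel(config, input_preprocessor=preprocessor, decoder=decoder)

>>> tokenizer = PerceiverTokenizer()
>>> inputs = tokenizer("hello world", return_tensors="pt").input_ids
>>> with torch.no_grad():
...     outputs = model(inputs=inputs)
>>> outputs.logits.shape  # (batch_size, num_labels)
torch.Size([1, 2])
```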
## PerceiverForMaskedLM
Example use of Perceiver for masked language modeling.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
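A minimal usage sketch, mirroring the byte-level masked language modeling walkthrough from the overview (the masked slice covers the bytes of " missing.", with position 0 taken by [CLS]):

```python
>>> from transformers import PerceiverTokenizer, PerceiverForMaskedLM

>>> tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
>>> model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

>>> text = "This is an incomplete sentence where some words are missing."
>>> inputs = tokenizer(text, padding="max_length", return_tensors="pt")

>>> # mask the bytes corresponding to " missing."
>>> inputs["input_ids"][0, 52:61] = tokenizer.mask_token_id

>>> outputs = model(**inputs)
>>> logits = outputs.logits  # (batch_size, 2048, 262)

>>> # greedy prediction for the masked span
>>> predicted_ids = logits[0, 52:61].argmax(dim=-1)
>>> tokenizer.decode(predicted_ids)
```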