Dataset columns:
- source: string (470 distinct values)
- url: string (length 49–167)
- file_type: string (1 distinct value)
- chunk: string (length 1–512)
- chunk_id: string (length 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_sizes (`List[int]`, *optional*, defaults to `[3, 256, 256, 2, 256, 256, 10]`): The vocabulary sizes of the `token_type_ids` passed when calling [`TapasModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers.
307_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. positive_label_weight (`float`, *optional*, defaults to 10.0): Weight for positive labels. num_aggregation_labels (`int`, *optional*, defaults to 0): The number of aggregation operators to predict. aggregation_loss_weight (`float`, *optional*, defaults to 1.0): Importance weight for the aggregation loss. use_answer_as_supervision (`bool`, *optional*):
307_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
Importance weight for the aggregation loss. use_answer_as_supervision (`bool`, *optional*): Whether to use the answer as the only supervision for aggregation examples. answer_loss_importance (`float`, *optional*, defaults to 1.0): Importance weight for the regression loss. use_normalized_answer_loss (`bool`, *optional*, defaults to `False`): Whether to normalize the answer loss by the maximum of the predicted and expected value. huber_loss_delta (`float`, *optional*):
307_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
huber_loss_delta (`float`, *optional*): Delta parameter used to calculate the regression loss. temperature (`float`, *optional*, defaults to 1.0): Value used to control (or change) the skewness of cell logit probabilities. aggregation_temperature (`float`, *optional*, defaults to 1.0): Scales aggregation logits to control the skewness of probabilities. use_gumbel_for_cells (`bool`, *optional*, defaults to `False`): Whether to apply Gumbel-Softmax to cell selection.
307_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
use_gumbel_for_cells (`bool`, *optional*, defaults to `False`): Whether to apply Gumbel-Softmax to cell selection. use_gumbel_for_aggregation (`bool`, *optional*, defaults to `False`): Whether to apply Gumbel-Softmax to aggregation selection. average_approximation_function (`string`, *optional*, defaults to `"ratio"`): Method to calculate the expected average of cells in the weak supervision case. One of `"ratio"`, `"first_order"` or `"second_order"`. cell_selection_preference (`float`, *optional*):
307_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
`"first_order"` or `"second_order"`. cell_selection_preference (`float`, *optional*): Preference for cell selection in ambiguous cases. Only applicable in case of weak supervision for aggregation (WTQ, WikiSQL). If the total mass of the aggregation probabilities (excluding the "NONE" operator) is higher than this hyperparameter, then aggregation is predicted for an example. answer_loss_cutoff (`float`, *optional*): Ignore examples with answer loss larger than cutoff.
307_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
answer_loss_cutoff (`float`, *optional*): Ignore examples with answer loss larger than cutoff. max_num_rows (`int`, *optional*, defaults to 64): Maximum number of rows. max_num_columns (`int`, *optional*, defaults to 32): Maximum number of columns. average_logits_per_cell (`bool`, *optional*, defaults to `False`): Whether to average logits per cell. select_one_column (`bool`, *optional*, defaults to `True`): Whether to constrain the model to only select cells from a single column.
307_5_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
Whether to constrain the model to only select cells from a single column. allow_empty_column_selection (`bool`, *optional*, defaults to `False`): Whether to allow the model to select no column at all. init_cell_selection_weights_to_zero (`bool`, *optional*, defaults to `False`): Whether to initialize cell selection weights to 0 so that the initial probabilities are 50%. reset_position_index_per_cell (`bool`, *optional*, defaults to `True`):
307_5_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
reset_position_index_per_cell (`bool`, *optional*, defaults to `True`): Whether to restart position indexes at every cell (i.e. use relative position embeddings). disable_per_token_loss (`bool`, *optional*, defaults to `False`): Whether to disable any (strong or weak) supervision on cells. aggregation_labels (`Dict[int, label]`, *optional*): The aggregation labels used to aggregate the results. For example, the WTQ models have the following
307_5_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
The aggregation labels used to aggregate the results. For example, the WTQ models have the following aggregation labels: `{0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}` no_aggregation_label_index (`int`, *optional*): If the aggregation labels are defined and one of these labels represents "No aggregation", this should be set to its index. For example, the WTQ models have the "NONE" aggregation label at index 0, so that value should be set to 0 for these models. Example: ```python
307_5_19
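The interplay between `num_aggregation_labels`, `aggregation_labels`, `no_aggregation_label_index`, `use_answer_as_supervision` and `cell_selection_preference` is easiest to see in a configuration sketch. The following is a minimal, hypothetical WTQ-style weak-supervision setup; the numeric values are illustrative and not the tuned defaults of any released checkpoint:

```python
>>> from transformers import TapasConfig, TapasForQuestionAnswering

>>> # weak supervision for aggregation: the model must infer SUM/AVERAGE/COUNT from the answer alone
>>> config = TapasConfig(
...     num_aggregation_labels=4,
...     aggregation_labels={0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"},
...     no_aggregation_label_index=0,
...     use_answer_as_supervision=True,
...     cell_selection_preference=0.2,  # illustrative threshold, not a recommended value
... )
>>> model = TapasForQuestionAnswering(config)
```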
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
should be set to 0 for these models. Example: ```python >>> from transformers import TapasModel, TapasConfig
307_5_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
>>> # Initializing a default (SQA) Tapas configuration
>>> configuration = TapasConfig()

>>> # Initializing a model from the configuration
>>> model = TapasModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

Construct a TAPAS tokenizer. Based on WordPiece. Flattens a table and one or more related sentences to be used by TAPAS models. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
307_5_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. [`TapasTokenizer`] creates several token type ids to encode tabular structure. To be more precise, it adds 7 token type ids, in the following order: `segment_ids`, `column_ids`, `row_ids`, `prev_labels`, `column_ranks`, `inv_column_ranks` and `numeric_relations`:
307_5_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
`column_ids`, `row_ids`, `prev_labels`, `column_ranks`, `inv_column_ranks` and `numeric_relations`: - segment_ids: indicate whether a token belongs to the question (0) or the table (1). 0 for special tokens and padding. - column_ids: indicate to which column of the table a token belongs (starting from 1). Is 0 for all question tokens, special tokens and padding. - row_ids: indicate to which row of the table a token belongs (starting from 1). Is 0 for all question tokens,
307_5_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
- row_ids: indicate to which row of the table a token belongs (starting from 1). Is 0 for all question tokens, special tokens and padding. Tokens of column headers are also 0. - prev_labels: indicate whether a token was (part of) an answer to the previous question (1) or not (0). Useful in a conversational setup (such as SQA). - column_ranks: indicate the rank of a table token relative to a column, if applicable. For example, if you have a
307_5_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
- column_ranks: indicate the rank of a table token relative to a column, if applicable. For example, if you have a column "number of movies" with values 87, 53 and 69, then the column ranks of these tokens are 3, 1 and 2 respectively. 0 for all question tokens, special tokens and padding. - inv_column_ranks: indicate the inverse rank of a table token relative to a column, if applicable. For example, if
307_5_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
- inv_column_ranks: indicate the inverse rank of a table token relative to a column, if applicable. For example, if you have a column "number of movies" with values 87, 53 and 69, then the inverse column ranks of these tokens are 1, 3 and 2 respectively. 0 for all question tokens, special tokens and padding. - numeric_relations: indicate numeric relations between the question and the tokens of the table. 0 for all question tokens, special tokens and padding.
307_5_26
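To make the ordering of the seven token type ids concrete, here is a small, hedged sketch; the table and question are invented, and it assumes the `google/tapas-base` checkpoint is available:

```python
>>> import pandas as pd
>>> from transformers import TapasTokenizer

>>> tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
>>> # table cell values are passed as strings
>>> table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Number of movies": ["87", "53"]})
>>> encoding = tokenizer(table=table, queries="How many movies does Brad Pitt have?", return_tensors="pt")
>>> # the last dimension stacks segment_ids, column_ids, row_ids, prev_labels,
>>> # column_ranks, inv_column_ranks and numeric_relations
>>> print(encoding["token_type_ids"].shape)  # (batch_size, sequence_length, 7)
```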
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
question tokens, special tokens and padding. [`TapasTokenizer`] runs end-to-end tokenization on a table and associated sentences: punctuation splitting and wordpiece. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`): Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*):
307_5_27
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True` unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`):
307_5_28
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths.
307_5_29
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`):
307_5_30
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. empty_token (`str`, *optional*, defaults to `"[EMPTY]"`): The token used for empty cell values in a table. Empty cell values include "", "n/a", "nan" and "?". tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
307_5_31
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). cell_trim_length (`int`, *optional*, defaults to -1):
307_5_32
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
value for `lowercase` (as in the original BERT). cell_trim_length (`int`, *optional*, defaults to -1): If > 0: trim cells so that their length is <= this value. This also disables further cell trimming and should thus be used with `truncation` set to `True`. max_column_id (`int`, *optional*): Max column id to extract. max_row_id (`int`, *optional*): Max row id to extract. strip_column_names (`bool`, *optional*, defaults to `False`): Whether to add empty strings instead of column names.
307_5_33
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
strip_column_names (`bool`, *optional*, defaults to `False`): Whether to add empty strings instead of column names. update_answer_coordinates (`bool`, *optional*, defaults to `False`): Whether to recompute the answer coordinates from the answer text. min_question_length (`int`, *optional*): Minimum length of each question in terms of tokens (will be skipped otherwise). max_question_length (`int`, *optional*): Maximum length of each question in terms of tokens (will be skipped otherwise).
307_5_34
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
max_question_length (`int`, *optional*): Maximum length of each question in terms of tokens (will be skipped otherwise). clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces. Methods: __call__ - convert_logits_to_predictions - save_vocabulary <frameworkcontent> <pt> The bare Tapas Model transformer outputting raw hidden-states without any specific head on top.
307_5_35
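Since `convert_logits_to_predictions` is listed among the tokenizer methods, a hedged end-to-end sketch may help; it assumes the `google/tapas-base-finetuned-wtq` checkpoint and an invented table:

```python
>>> import pandas as pd
>>> import torch
>>> from transformers import TapasTokenizer, TapasForQuestionAnswering

>>> name = "google/tapas-base-finetuned-wtq"
>>> tokenizer = TapasTokenizer.from_pretrained(name)
>>> model = TapasForQuestionAnswering.from_pretrained(name)

>>> table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Number of movies": ["87", "53"]})
>>> inputs = tokenizer(table=table, queries="How many movies does Brad Pitt have?", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # maps logits back to table cell coordinates and aggregation operator indices
>>> coordinates, aggregation_indices = tokenizer.convert_logits_to_predictions(
...     inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
... )
```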
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
<frameworkcontent> <pt> The bare Tapas Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
307_5_36
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`TapasConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
307_5_37
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. This class is a small change compared to [`BertModel`], taking into account the additional token type ids. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
307_5_38
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. Methods: forward Tapas Model with a `language modeling` head on top.
307_5_39
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
Methods: forward Tapas Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
307_5_40
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`TapasConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
307_5_41
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward Tapas Model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for table entailment tasks, such as TabFact (Chen et al., 2020). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
307_5_42
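For the table-entailment use case mentioned here, a short, hedged sketch; the checkpoint name and the table/claim pair are assumptions for illustration:

```python
>>> import pandas as pd
>>> import torch
>>> from transformers import TapasTokenizer, TapasForSequenceClassification

>>> name = "google/tapas-base-finetuned-tabfact"
>>> tokenizer = TapasTokenizer.from_pretrained(name)
>>> model = TapasForSequenceClassification.from_pretrained(name)

>>> table = pd.DataFrame({"City": ["Paris", "Lyon"], "Population": ["2161000", "513275"]})
>>> inputs = tokenizer(table=table, queries="Paris has a larger population than Lyon", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # one score per entailment class (refuted / entailed)
```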
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
307_5_43
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
and behavior. Parameters: config ([`TapasConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward Tapas Model with a cell selection head and optional aggregation head on top for question-answering tasks on tables
307_5_44
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
Tapas Model with a cell selection head and optional aggregation head on top for question-answering tasks on tables (linear layers on top of the hidden-states output to compute `logits` and optional `logits_aggregation`), e.g. for SQA, WTQ or WikiSQL-supervised tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
307_5_45
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`TapasConfig`]): Model configuration class with all the parameters of the model.
307_5_46
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
and behavior. Parameters: config ([`TapasConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf> No docstring available for TFTapasModel Methods: call No docstring available for TFTapasForMaskedLM Methods: call
307_5_47
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#resources
.md
No docstring available for TFTapasModel Methods: call No docstring available for TFTapasForMaskedLM Methods: call No docstring available for TFTapasForSequenceClassification Methods: call No docstring available for TFTapasForQuestionAnswering Methods: call </tf> </frameworkcontent>
307_5_48
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
308_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
308_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#overview
.md
The LXMERT model was proposed in [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/abs/1908.07490) by Hao Tan & Mohit Bansal. It is a series of bidirectional transformer encoders (one for the vision modality, one for the language modality, and then one to fuse both modalities) pretrained using a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked
308_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#overview
.md
combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual-question answering objectives. The pretraining uses multiple multi-modal datasets: MSCOCO, Visual-Genome + Visual-Genome Question Answering, VQA 2.0, and GQA. The abstract from the paper is the following: *Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly,
308_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#overview
.md
*Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language
308_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#overview
.md
build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pretraining tasks: masked language modeling, masked object prediction (feature regression and label classification),
308_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#overview
.md
pretraining tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our
308_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#overview
.md
results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results; and also present several
308_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#overview
.md
model components and pretraining strategies significantly contribute to our strong results; and also present several attention visualizations for the different encoders* This model was contributed by [eltoto1219](https://huggingface.co/eltoto1219). The original code can be found [here](https://github.com/airsplay/lxmert).
308_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#usage-tips
.md
- Bounding boxes are not required for the visual feature embeddings; any kind of visual-spatial features will work. - Both the language hidden states and the visual hidden states that LXMERT outputs are passed through the cross-modality layer, so they contain information from both modalities. To access a modality that only attends to itself, select the vision/language hidden states from the first input in the tuple.
308_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#usage-tips
.md
itself, select the vision/language hidden states from the first input in the tuple. - The bidirectional cross-modality encoder attention only returns attention values when the language modality is used as the input and the vision modality is used as the context vector. Further, while the cross-modality encoder contains self-attention for each respective modality and cross-attention, only the cross attention is returned and both self attention outputs are disregarded.
308_2_1
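A hedged sketch of a forward pass that illustrates the tips above; the ROI features here are random placeholders standing in for a real object detector's output, not something this page prescribes:

```python
>>> import torch
>>> from transformers import LxmertTokenizer, LxmertModel

>>> tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
>>> model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

>>> inputs = tokenizer("Who is eating the carrot?", return_tensors="pt")
>>> visual_feats = torch.randn(1, 36, 2048)  # (batch_size, num_boxes, visual_feat_dim), random stand-in
>>> visual_pos = torch.rand(1, 36, 4)        # normalized bounding-box coordinates, random stand-in
>>> outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)

>>> # both streams have already passed through the cross-modality encoder
>>> print(outputs.language_output.shape, outputs.vision_output.shape, outputs.pooled_output.shape)
```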
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#resources
.md
- [Question answering task guide](../tasks/question_answering)
308_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
This is the configuration class to store the configuration of a [`LxmertModel`] or a [`TFLxmertModel`]. It is used to instantiate a LXMERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Lxmert [unc-nlp/lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) architecture.
308_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
[unc-nlp/lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the LXMERT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`LxmertModel`] or [`TFLxmertModel`].
308_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
`inputs_ids` passed when calling [`LxmertModel`] or [`TFLxmertModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_qa_labels (`int`, *optional*, defaults to 9500): This represents the total number of different question answering (QA) labels there are. If using more than
308_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
This represents the total number of different question answering (QA) labels there are. If using more than one dataset with QA, the user will need to account for the total number of labels that all of the datasets have in total. num_object_labels (`int`, *optional*, defaults to 1600): This represents the total number of semantically unique objects that lxmert will be able to classify a pooled-object feature as belonging to. num_attr_labels (`int`, *optional*, defaults to 400):
308_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
pooled-object feature as belonging to. num_attr_labels (`int`, *optional*, defaults to 400): This represents the total number of semantically unique attributes that lxmert will be able to classify a pooled-object feature as possessing. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
308_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities.
308_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the *token_type_ids* passed into [`BertModel`].
308_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the *token_type_ids* passed into [`BertModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. l_layers (`int`, *optional*, defaults to 9): Number of hidden layers in the Transformer language encoder.
308_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
l_layers (`int`, *optional*, defaults to 9): Number of hidden layers in the Transformer language encoder. x_layers (`int`, *optional*, defaults to 5): Number of hidden layers in the Transformer cross modality encoder. r_layers (`int`, *optional*, defaults to 5): Number of hidden layers in the Transformer visual encoder. visual_feat_dim (`int`, *optional*, defaults to 2048): This represents the last dimension of the pooled-object features used as input for the model, representing
308_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
This represents the last dimension of the pooled-object features used as input for the model, representing the size of each object feature itself. visual_pos_dim (`int`, *optional*, defaults to 4): This represents the number of spatial features that are mixed into the visual features. The default is set to 4 because most commonly this will represent the location of a bounding box, i.e., (x, y, width, height). visual_loss_normalizer (`float`, *optional*, defaults to 6.67):
308_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
visual_loss_normalizer (`float`, *optional*, defaults to 6.67): This represents the scaling factor by which each visual loss is multiplied if, during pretraining, one decides to train with multiple vision-based loss objectives. task_matched (`bool`, *optional*, defaults to `True`): This task is used for sentence-image matching. If the sentence correctly describes the image, the label will be 1. If the sentence does not correctly describe the image, the label will be 0.
308_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
be 1. If the sentence does not correctly describe the image, the label will be 0. task_mask_lm (`bool`, *optional*, defaults to `True`): Whether or not to add masked language modeling (as used in pretraining models such as BERT) to the loss objective. task_obj_predict (`bool`, *optional*, defaults to `True`): Whether or not to add object prediction, attribute prediction and feature regression to the loss objective. task_qa (`bool`, *optional*, defaults to `True`):
308_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertconfig
.md
task_qa (`bool`, *optional*, defaults to `True`): Whether or not to add the question-answering loss to the objective visual_obj_loss (`bool`, *optional*, defaults to `True`): Whether or not to calculate the object-prediction loss objective visual_attr_loss (`bool`, *optional*, defaults to `True`): Whether or not to calculate the attribute-prediction loss objective visual_feat_loss (`bool`, *optional*, defaults to `True`): Whether or not to calculate the feature-regression loss objective
308_4_12
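By analogy with the TAPAS configuration example earlier in this file, a minimal instantiation sketch (not part of the original page):

```python
>>> from transformers import LxmertConfig, LxmertModel

>>> # the default configuration matches the unc-nlp/lxmert-base-uncased architecture
>>> configuration = LxmertConfig()
>>> model = LxmertModel(configuration)
>>> configuration = model.config
```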
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizer
.md
Construct a Lxmert tokenizer. Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`):
308_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizer
.md
Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`): Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True` unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
308_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizer
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`):
308_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizer
.md
token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
308_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizer
.md
instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
308_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizer
.md
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original Lxmert). clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
308_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizer
.md
value for `lowercase` (as in the original Lxmert). clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces.
308_5_6
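A small, hedged usage sketch of the tokenizer described above; the checkpoint name and example sentence are assumptions:

```python
>>> from transformers import LxmertTokenizer

>>> tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
>>> print(tokenizer.tokenize("Who is eating the carrot?"))  # WordPiece tokens
>>> encoding = tokenizer("Who is eating the carrot?", return_tensors="pt")
>>> print(encoding["input_ids"].shape)
```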
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizerfast
.md
Construct a "fast" Lxmert tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. unk_token (`str`, *optional*, defaults to `"[UNK]"`):
308_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizerfast
.md
Whether or not to lowercase the input when tokenizing. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last
308_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizerfast
.md
sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
308_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizerfast
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. clean_text (`bool`, *optional*, defaults to `True`):
308_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizerfast
.md
modeling. This is the token which the model will try to predict. clean_text (`bool`, *optional*, defaults to `True`): Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this issue](https://github.com/huggingface/transformers/issues/328)).
308_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmerttokenizerfast
.md
issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original Lxmert). wordpieces_prefix (`str`, *optional*, defaults to `"##"`): The prefix for subwords.
308_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
models.lxmert.modeling_lxmert.LxmertModelOutput Lxmert's outputs that contain the last hidden states, pooled outputs, and attention probabilities for the language, visual, and cross-modality encoders. (Note: the visual encoder in Lxmert is referred to as the "relationship" encoder.) Args: language_output (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the language encoder.
308_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
Sequence of hidden-states at the output of the last layer of the language encoder. vision_output (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the visual encoder. pooled_output (`torch.FloatTensor` of shape `(batch_size, hidden_size)`): Last layer hidden-state of the first token of the sequence (classification, CLS, token) further processed by a Linear layer and a Tanh activation function. The Linear
308_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
by a Linear layer and a Tanh activation function. The Linear language_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for input features + one for the output of each cross-modality layer) of shape `(batch_size, sequence_length, hidden_size)`.
308_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. vision_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for input features + one for the output of each cross-modality layer) of shape `(batch_size, sequence_length, hidden_size)`.
308_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. language_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
308_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. vision_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
308_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. cross_encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput
308_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput Output type of [`LxmertForPreTraining`]. Args: loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
308_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
(classification) loss. prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). cross_relationship_score (`torch.FloatTensor` of shape `(batch_size, 2)`): Prediction scores of the textual matching objective (classification) head (scores of True/False continuation before SoftMax). question_answering_score (`torch.FloatTensor` of shape `(batch_size, n_qa_answers)`):
308_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
continuation before SoftMax). question_answering_score (`torch.FloatTensor` of shape `(batch_size, n_qa_answers)`): Prediction scores of question answering objective (classification). language_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for input features + one for the output of each cross-modality layer) of shape `(batch_size, sequence_length, hidden_size)`.
308_7_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. vision_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for input features + one for the output of each cross-modality layer) of shape `(batch_size, sequence_length, hidden_size)`.
308_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. language_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
308_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. vision_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
308_7_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. cross_encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput
308_7_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput Output type of [`LxmertForQuestionAnswering`]. Args: loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. question_answering_score (`torch.FloatTensor` of shape `(batch_size, n_qa_answers)`, *optional*):
308_7_14
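A hedged sketch of reading these output fields; the checkpoint name and the random ROI features are assumptions for illustration only, and the QA head may be newly initialized with this base checkpoint:

```python
>>> import torch
>>> from transformers import LxmertTokenizer, LxmertForQuestionAnswering

>>> tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
>>> model = LxmertForQuestionAnswering.from_pretrained("unc-nlp/lxmert-base-uncased")

>>> inputs = tokenizer("What color is the car?", return_tensors="pt")
>>> visual_feats = torch.randn(1, 36, 2048)  # random stand-in for detector features
>>> visual_pos = torch.rand(1, 36, 4)
>>> outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos, output_attentions=True)

>>> print(outputs.question_answering_score.shape)  # (batch_size, num_qa_labels)
>>> print(len(outputs.cross_encoder_attentions))   # one tensor per cross-modality layer
```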
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
(classification) loss. question_answering_score (`torch.FloatTensor` of shape `(batch_size, n_qa_answers)`, *optional*): Prediction scores of question answering objective (classification). language_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for input features + one for the output of each cross-modality layer) of shape `(batch_size, sequence_length, hidden_size)`.
308_7_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. vision_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for input features + one for the output of each cross-modality layer) of shape `(batch_size, sequence_length, hidden_size)`.
308_7_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. language_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
308_7_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. vision_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
308_7_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. cross_encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput:
308_7_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
the self-attention heads. [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput: modeling_tf_lxmert requires the TensorFlow library but it was not found in your environment. However, we were able to find a PyTorch installation. PyTorch classes do not begin with "TF", but are otherwise identically named to our TF classes. If you want to use PyTorch, please use those classes instead! If you really do want to use TensorFlow, please follow the instructions on the
308_7_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
If you really do want to use TensorFlow, please follow the instructions on the installation page https://www.tensorflow.org/install that match your environment. [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput: modeling_tf_lxmert requires the TensorFlow library but it was not found in your environment. However, we were able to find a PyTorch installation. PyTorch classes do not begin with "TF", but are otherwise identically named to our TF classes.
308_7_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmert-specific-outputs
.md
with "TF", but are otherwise identically named to our TF classes. If you want to use PyTorch, please use those classes instead! If you really do want to use TensorFlow, please follow the instructions on the installation page https://www.tensorflow.org/install that match your environment. <frameworkcontent> <pt>
308_7_22