## ConditionalDetrConfig

encoder_layers (`int`, *optional*, defaults to 6):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 6):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimension of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported. | 371_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md | https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#conditionaldetrconfig | .md | `"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (`float`, *optional*, defaults to 1):
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (`bool`, *optional*, defaults to `False`):
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (`str`, *optional*, defaults to `"sine"`):
Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`.
backbone (`str`, *optional*, defaults to `"resnet50"`):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `True`):
Whether to use pretrained weights for the backbone.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
dilation (`bool`, *optional*, defaults to `False`):
Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when
`use_timm_backbone` = `True`.
class_cost (`float`, *optional*, defaults to 1):
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (`float`, *optional*, defaults to 5):
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (`float`, *optional*, defaults to 5):
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (`float`, *optional*, defaults to 0.1):
Relative classification weight of the 'no-object' class in the object detection loss.
focal_alpha (`float`, *optional*, defaults to 0.25):
Alpha parameter in the focal loss.
Examples:
```python
>>> from transformers import ConditionalDetrConfig, ConditionalDetrModel

>>> # Initializing a Conditional DETR microsoft/conditional-detr-resnet-50 style configuration
>>> configuration = ConditionalDetrConfig()
>>> # Initializing a model (with random weights) from the microsoft/conditional-detr-resnet-50 style configuration
>>> model = ConditionalDetrModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
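The loss-related arguments documented above can also be overridden at construction time. A minimal sketch, using only parameters from the argument list on this page (the values are illustrative, not tuned):

```python
>>> from transformers import ConditionalDetrConfig

>>> # Hypothetical weighting that emphasizes box regression in the Hungarian matcher
>>> configuration = ConditionalDetrConfig(
...     class_cost=2.0,  # classification term of the matching cost
...     bbox_cost=5.0,  # L1 box term of the matching cost
...     giou_cost=2.0,  # generalized IoU term of the matching cost
...     auxiliary_loss=True,  # add a loss at each decoder layer
... )
>>> configuration.decoder_layers
6
```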
## ConditionalDetrImageProcessor

Constructs a Conditional DETR image processor.
Args:
format (`str`, *optional*, defaults to `"coco_detection"`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting | 371_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md | https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#conditionaldetrimageprocessor | .md | - `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less or equal to `shortest_edge` and the longest edge
less or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_annotations (`bool`, *optional*, defaults to `True`):
Controls whether to convert the annotations to the format expected by the DETR model. Converts the
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
height and width in the batch.
Methods: preprocess
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
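The resizing, rescaling, normalization and padding behavior documented above can be exercised directly. A minimal sketch, assuming the `microsoft/conditional-detr-resnet-50` checkpoint referenced elsewhere on this page:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import ConditionalDetrImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # resizes so the shortest edge is 800 (longest capped at 1333), then rescales, normalizes and pads
>>> processor = ConditionalDetrImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
>>> inputs = processor(images=image, return_tensors="pt")
>>> sorted(inputs.keys())
['pixel_mask', 'pixel_values']
```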
## ConditionalDetrFeatureExtractor

No docstring available for ConditionalDetrFeatureExtractor
Methods: __call__
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
## ConditionalDetrModel

The bare Conditional DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw
hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ConditionalDetrConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
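A minimal sketch of a forward pass; the raw decoder output has one hidden state per object query:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, ConditionalDetrModel

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
>>> model = ConditionalDetrModel.from_pretrained("microsoft/conditional-detr-resnet-50")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> # (batch_size, num_queries, d_model)
>>> list(outputs.last_hidden_state.shape)
[1, 300, 256]
```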
## ConditionalDetrForObjectDetection

Conditional DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on
top, for tasks such as COCO detection.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ConditionalDetrConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
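Putting the pieces together, a sketch of end-to-end detection with the image processor's `post_process_object_detection` (the confidence threshold is illustrative):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
>>> model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # rescale the normalized boxes back to the original image size
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```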
## ConditionalDetrForSegmentation

Conditional DETR Model (consisting of a backbone and encoder-decoder Transformer) with a segmentation head on top,
for tasks such as COCO panoptic.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ConditionalDetrConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
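No pretrained panoptic checkpoint is assumed here, so a brief sketch of instantiating the segmentation variant from a configuration with random weights:

```python
>>> from transformers import ConditionalDetrConfig, ConditionalDetrForSegmentation

>>> # randomly initialized; swap in from_pretrained(...) if a suitable panoptic checkpoint exists
>>> configuration = ConditionalDetrConfig()
>>> model = ConditionalDetrForSegmentation(configuration)
```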
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 372_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/visual_bert.md | https://huggingface.co/docs/transformers/en/model_doc/visual_bert/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# VisualBERT

## Overview

The VisualBERT model was proposed in [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
VisualBERT is a neural network trained on a variety of (image, text) pairs.
The abstract from the paper is the following:
*We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks.
VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an
associated input image with self-attention. We further propose two visually-grounded language model objectives for
pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2,
and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly
simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any
explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between
verbs and image regions corresponding to their arguments.*
This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The original code can be found [here](https://github.com/uclanlp/visualbert).
## Usage tips

1. Most of the checkpoints provided work with the [`VisualBertForPreTraining`] configuration. Other
checkpoints provided are the fine-tuned checkpoints for down-stream tasks - VQA ('visualbert-vqa'), VCR
('visualbert-vcr'), NLVR2 ('visualbert-nlvr2'). Hence, if you are not working on these downstream tasks, it is
recommended that you use the pretrained checkpoints.
2. For the VCR task, the authors use a fine-tuned detector for generating visual embeddings, for all the checkpoints.
We do not provide the detector and its weights as part of the package, but they are available in the research
projects, and the states can be loaded directly into the detector provided.
VisualBERT is a multi-modal vision and language model. It can be used for visual question answering, multiple choice,
visual reasoning and region-to-phrase correspondence tasks. VisualBERT uses a BERT-like transformer to prepare
embeddings for image-text pairs. Both the text and visual features are then projected to a latent space with identical
dimension.
To feed images to the model, each image is passed through a pre-trained object detector and the regions and the
bounding boxes are extracted. The authors use the features generated after passing these regions through a pre-trained
CNN like ResNet as visual embeddings. They also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard BERT model. The text input is concatenated in front of the visual embeddings in the embedding
layer, and is expected to be bounded by a [CLS] and a [SEP] token, as in BERT. The segment IDs must also be set
appropriately for the textual and visual parts.
The [`BertTokenizer`] is used to encode the text. A custom detector/image processor must be used
to get the visual embeddings. The following example notebooks show how to use VisualBERT with Detectron-like models:
- [VisualBERT VQA demo notebook](https://github.com/huggingface/transformers/tree/main/examples/research_projects/visual_bert) : This notebook
contains an example on VisualBERT VQA.
- [Generate Embeddings for VisualBERT (Colab Notebook)](https://colab.research.google.com/drive/1bLGxKdldwqnMVA5x4neY7-l_8fKGWQYI?usp=sharing) : This notebook contains
an example on how to generate visual embeddings.
The following example shows how to get the last hidden state using [`VisualBertModel`]:
```python
>>> import torch
>>> from transformers import BertTokenizer, VisualBertModel

>>> model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("What is the man eating?", return_tensors="pt")
>>> # this is a custom function that returns the visual embeddings given the image path
>>> visual_embeds = get_visual_embeddings(image_path)

>>> visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
>>> visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
>>> inputs.update(
... {
... "visual_embeds": visual_embeds,
... "visual_token_type_ids": visual_token_type_ids,
... "visual_attention_mask": visual_attention_mask,
... }
... )
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
```
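Note that `get_visual_embeddings` above is user-supplied (see the linked notebooks). As a shape contract only, a hypothetical stand-in that returns random features with the documented default `visual_embedding_dim` of 512:

```python
>>> import torch


>>> # Hypothetical placeholder: real visual embeddings come from a detector pipeline such as
>>> # the Detectron-based one in the notebooks above
>>> def get_visual_embeddings(image_path, num_regions=36, visual_embedding_dim=512):
...     return torch.randn(1, num_regions, visual_embedding_dim)
```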
## VisualBertConfig

This is the configuration class to store the configuration of a [`VisualBertModel`]. It is used to instantiate a
VisualBERT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the VisualBERT
[uclanlp/visualbert-vqa-coco-pre](https://huggingface.co/uclanlp/visualbert-vqa-coco-pre) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the VisualBERT model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`VisualBertModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
visual_embedding_dim (`int`, *optional*, defaults to 512):
Dimensionality of the visual embeddings to be passed to the model.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`VisualBertModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
bypass_transformer (`bool`, *optional*, defaults to `False`):
Whether or not the model should bypass the transformer for the visual embeddings. If set to `True`, the
model directly concatenates the visual embeddings from [`VisualBertEmbeddings`] with the text output from
the transformer, and then passes it to a self-attention layer.
special_visual_initialize (`bool`, *optional*, defaults to `True`):
Whether or not the visual token type and position type embedding weights should be initialized the same as
the textual token type and position type embeddings. When set to `True`, the weights of the textual token
type and position type embeddings are copied to the respective visual embedding layers.
Example:
```python
>>> from transformers import VisualBertConfig, VisualBertModel

>>> # Initializing a VisualBERT visualbert-vqa-coco-pre style configuration
>>> configuration = VisualBertConfig.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
>>> # Initializing a model (with random weights) from the visualbert-vqa-coco-pre style configuration
>>> model = VisualBertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
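The visual-side arguments can likewise be overridden at construction; a brief sketch with an illustrative value:

```python
>>> from transformers import VisualBertConfig, VisualBertModel

>>> # match the visual embedding width to a custom detector's feature size (illustrative)
>>> configuration = VisualBertConfig(visual_embedding_dim=1024)
>>> model = VisualBertModel(configuration)
>>> model.config.visual_embedding_dim
1024
```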
## VisualBertModel

The bare VisualBert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VisualBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
Methods: forward
## VisualBertForPreTraining

VisualBert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a
`sentence-image prediction (classification)` head.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VisualBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## VisualBertForQuestionAnswering

VisualBert Model with a classification/regression head on top (a dropout and a linear layer on top of the pooled
output) for VQA.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VisualBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
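A sketch of inference with the VQA fine-tuned checkpoint named in the usage tips; the visual inputs follow the same pattern as the [`VisualBertModel`] example above, with `get_visual_embeddings` again user-supplied:

```python
>>> import torch
>>> from transformers import BertTokenizer, VisualBertForQuestionAnswering

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = VisualBertForQuestionAnswering.from_pretrained("uclanlp/visualbert-vqa")

>>> inputs = tokenizer("What is the man eating?", return_tensors="pt")
>>> visual_embeds = get_visual_embeddings(image_path)  # user-supplied, as above
>>> inputs.update(
...     {
...         "visual_embeds": visual_embeds,
...         "visual_token_type_ids": torch.ones(visual_embeds.shape[:-1], dtype=torch.long),
...         "visual_attention_mask": torch.ones(visual_embeds.shape[:-1], dtype=torch.float),
...     }
... )
>>> outputs = model(**inputs)
>>> predicted_answer_idx = outputs.logits.argmax(-1).item()  # index into the VQA answer vocabulary
```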
## VisualBertForMultipleChoice

VisualBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for VCR tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VisualBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## VisualBertForVisualReasoning

VisualBert Model with a sequence classification head on top (a dropout and a linear layer on top of the pooled
output) for Visual Reasoning e.g. for NLVR task.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VisualBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## VisualBertForRegionToPhraseAlignment

VisualBert Model with a Masked Language Modeling head and an attention layer on top for Region-to-Phrase Alignment
e.g. for Flickr30 Entities task.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VisualBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 373_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# BigBirdPegasus

## Overview

The BigBird model was proposed in [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
*Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.*
The original code can be found [here](https://github.com/google-research/bigbird).
## Usage tips

- For a detailed explanation of how BigBird's attention works, see [this blog post](https://huggingface.co/blog/big-bird).
- BigBird comes with 2 implementations: **original_full** & **block_sparse**. For sequence lengths < 1024, using
**original_full** is advised as there is no benefit in using **block_sparse** attention (see the sketch after this list).
- The code currently uses window size of 3 blocks and 2 global blocks.
- Sequence length must be divisible by block size.
- Current implementation supports only **ITC**.
- Current implementation doesn't support **num_random_blocks = 0**.
- BigBirdPegasus uses the [PegasusTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pegasus/tokenization_pegasus.py).
- BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
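For example, the attention implementation can be selected when loading the model. A sketch, assuming the `google/bigbird-pegasus-large-arxiv` checkpoint discussed below; `attention_type` and `block_size` are BigBirdPegasus config arguments:

```python
>>> from transformers import BigBirdPegasusForConditionalGeneration

>>> # inputs shorter than 1024 tokens: sparse attention brings no benefit
>>> model = BigBirdPegasusForConditionalGeneration.from_pretrained(
...     "google/bigbird-pegasus-large-arxiv", attention_type="original_full"
... )

>>> # long inputs: block-sparse attention; sequence length must be divisible by block_size
>>> model = BigBirdPegasusForConditionalGeneration.from_pretrained(
...     "google/bigbird-pegasus-large-arxiv", attention_type="block_sparse", block_size=64
... )
```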
## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
## BigBirdPegasusConfig

This is the configuration class to store the configuration of a [`BigBirdPegasusModel`]. It is used to instantiate
a BigBirdPegasus model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the BigBirdPegasus
[google/bigbird-pegasus-large-arxiv](https://huggingface.co/google/bigbird-pegasus-large-arxiv) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 96103):
Vocabulary size of the BigBirdPegasus model. Defines the number of different tokens that can be represented
by the `input_ids` passed when calling [`BigBirdPegasusModel`].
d_model (`int`, *optional*, defaults to 1024):
Dimension of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 16):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 16):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16): | 373_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096): | 373_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimension of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu_new"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | 373_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
max_position_embeddings (`int`, *optional*, defaults to 4096): | 373_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | The dropout ratio for the classifier.
max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 1024 or 2048 or 4096).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0): | 373_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
use_cache (`bool`, *optional*, defaults to `True`): | 373_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | for more details.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
attention_type (`str`, *optional*, defaults to `"block_sparse"`):
Whether to use block sparse attention (with O(n) complexity) as introduced in the paper or the original attention
layer (with O(n^2) complexity) in the encoder. Possible values are `"original_full"` and `"block_sparse"` (see the second example below).
use_bias (`bool`, *optional*, defaults to `False`): | 373_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | use_bias (`bool`, *optional*, defaults to `False`):
Whether to use bias in the query, key and value projections.
block_size (`int`, *optional*, defaults to 64):
Size of each block. Useful only when `attention_type == "block_sparse"`.
num_random_blocks (`int`, *optional*, defaults to 3):
Each query is going to attend to this many random blocks. Useful only when `attention_type ==
"block_sparse"`.
scale_embeddings (`bool`, *optional*, defaults to `True`):
Whether to rescale embeddings with (hidden_size ** 0.5). | 373_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | scale_embeddings (`bool`, *optional*, defaults to `True`):
Whether to rescale embeddings with (hidden_size ** 0.5).
Example:
```python
>>> from transformers import BigBirdPegasusConfig, BigBirdPegasusModel | 373_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusconfig | .md | >>> # Initializing a BigBirdPegasus bigbird-pegasus-base style configuration
>>> configuration = BigBirdPegasusConfig()
>>> # Initializing a model (with random weights) from the bigbird-pegasus-base style configuration
>>> model = BigBirdPegasusModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all | 373_4_11 |
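Beyond the default configuration above, a hedged sketch of a block-sparse setup exercising the parameters documented earlier; the values shown are illustrative, and input sequence lengths must stay divisible by `block_size`:
```python
>>> from transformers import BigBirdPegasusConfig, BigBirdPegasusModel

>>> # Illustrative values; block_size and num_random_blocks only take effect with "block_sparse"
>>> sparse_configuration = BigBirdPegasusConfig(
...     attention_type="block_sparse",
...     block_size=64,
...     num_random_blocks=3,
...     max_position_embeddings=4096,
... )
>>> model = BigBirdPegasusModel(sparse_configuration)
```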
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bigbird_pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/bigbird_pegasus/#bigbirdpegasusmodel | .md | The bare BigBirdPegasus Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 373_5_0 |
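A minimal forward-pass sketch for the bare model, assuming the `google/bigbird-pegasus-large-arxiv` checkpoint; with an input this short, the encoder may fall back from block-sparse to full attention and log a warning:
```python
>>> from transformers import AutoTokenizer, BigBirdPegasusModel

>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
>>> model = BigBirdPegasusModel.from_pretrained("google/bigbird-pegasus-large-arxiv")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # decoder_input_ids are derived from input_ids when not passed explicitly
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state  # shape: (batch_size, sequence_length, d_model)
```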