source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | - Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks.
- The PyTorch version of this model is only available in torch 1.10 and higher. | 394_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerconfig | .md | This is the configuration class to store the configuration of a [`BridgeTowerModel`]. It is used to instantiate a
BridgeTower model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the bridgetower-base
[BridgeTower/bridgetower-base](https://huggingface.co/BridgeTower/bridgetower-base/) architecture. | 394_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerconfig | .md | [BridgeTower/bridgetower-base](https://huggingface.co/BridgeTower/bridgetower-base/) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
share_cross_modal_transformer_layers (`bool`, *optional*, defaults to `True`):
Whether cross modal transformer layers are shared.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): | 394_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerconfig | .md | Whether cross modal transformer layers are shared.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
initializer_factor (`float`, *optional*, defaults to 1):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing). | 394_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerconfig | .md | A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
share_link_tower_layers (`bool`, *optional*, defaults to `False`):
Whether the bridge/link tower layers are shared.
link_tower_type (`str`, *optional*, defaults to `"add"`):
Type of the bridge/link layer.
num_attention_heads (`int`, *optional*, defaults to 12): | 394_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerconfig | .md | Type of the bridge/link layer.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer encoder.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie input and output embeddings.
init_layernorm_from_vision_encoder (`bool`, *optional*, defaults to `False`):
Whether to init LayerNorm from the vision encoder. | 394_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerconfig | .md | Whether to init LayerNorm from the vision encoder.
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`BridgeTowerTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`BridgeTowerVisionConfig`].
Example:
```python
>>> from transformers import BridgeTowerModel, BridgeTowerConfig | 394_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerconfig | .md | >>> # Initializing a BridgeTower BridgeTower/bridgetower-base style configuration
>>> configuration = BridgeTowerConfig()
>>> # Initializing a model from the BridgeTower/bridgetower-base style configuration
>>> model = BridgeTowerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 394_3_6 |
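The `text_config` and `vision_config` arguments above accept plain dictionaries, so a combined configuration can also be assembled from the two sub-configurations. A minimal sketch (the override values are illustrative only):
```python
from transformers import (
    BridgeTowerConfig,
    BridgeTowerTextConfig,
    BridgeTowerVisionConfig,
)

# Build the sub-configurations first; the values shown here are just illustrative overrides.
text_config = BridgeTowerTextConfig(hidden_dropout_prob=0.2)
vision_config = BridgeTowerVisionConfig(image_size=384)

# Pass them as dicts, which is what the `text_config`/`vision_config` arguments expect.
configuration = BridgeTowerConfig(
    text_config=text_config.to_dict(),
    vision_config=vision_config.to_dict(),
)
```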
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | This is the configuration class to store the text configuration of a [`BridgeTowerModel`]. The default values here
are copied from RoBERTa. Instantiating a configuration with the defaults will yield a similar configuration to that
of the bridgetower-base [BridgeTower/bridgetower-base](https://huggingface.co/BridgeTower/bridgetower-base/)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 394_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the text part of the model. Defines the number of different tokens that can be
represented by the `inputs_ids` passed when calling [`BridgeTowerModel`].
hidden_size (`int`, *optional*, defaults to 768): | 394_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | represented by the `inputs_ids` passed when calling [`BridgeTowerModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072): | 394_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1): | 394_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | `"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 514):
The maximum sequence length that this model might ever be used with. Typically set this to something large | 394_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids`.
initializer_factor (`float`, *optional*, defaults to 1):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (`float`, *optional*, defaults to 1e-05): | 394_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | testing).
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). | 394_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
is_decoder (`bool`, *optional*, defaults to `False`):
Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
use_cache (`bool`, *optional*, defaults to `True`): | 394_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
Example:
```python
>>> from transformers import BridgeTowerTextConfig | 394_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowertextconfig | .md | >>> # Initializing a BridgeTower BridgeTower/bridgetower-base style configuration for the text model
>>> configuration = BridgeTowerTextConfig()
>>> # Accessing the configuration
>>> configuration
``` | 394_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowervisionconfig | .md | This is the configuration class to store the vision configuration of a [`BridgeTowerModel`]. Instantiating a
configuration with the defaults will yield a similar configuration to that of the bridgetower-base
[BridgeTower/bridgetower-base](https://huggingface.co/BridgeTower/bridgetower-base/) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args: | 394_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowervisionconfig | .md | documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the visual encoder model.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
image_size (`int`, *optional*, defaults to 288):
The size (resolution) of each image.
initializer_factor (`float`, *optional*, defaults to 1): | 394_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowervisionconfig | .md | The size (resolution) of each image.
initializer_factor (`float`, *optional*, defaults to 1):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
stop_gradient (`bool`, *optional*, defaults to `False`):
Whether to stop gradient for training.
share_layernorm (`bool`, *optional*, defaults to `True`):
Whether LayerNorm layers are shared. | 394_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowervisionconfig | .md | share_layernorm (`bool`, *optional*, defaults to `True`):
Whether LayerNorm layers are shared.
remove_last_layer (`bool`, *optional*, defaults to `False`):
Whether to remove the last layer from the vision encoder.
Example:
```python
>>> from transformers import BridgeTowerVisionConfig | 394_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowervisionconfig | .md | >>> # Initializing a BridgeTower BridgeTower/bridgetower-base style configuration for the vision model
>>> configuration = BridgeTowerVisionConfig()
>>> # Accessing the configuration
>>> configuration
``` | 394_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | Constructs a BridgeTower image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the
`do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{'shortest_edge': 288}`):
Resize the shorter side of the input to `size["shortest_edge"]`. The longer side will be limited to under | 394_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | Resize the shorter side of the input to `size["shortest_edge"]`. The longer side will be limited to under
`int((1333 / 800) * size["shortest_edge"])` while preserving the aspect ratio. Only has an effect if
`do_resize` is set to `True`. Can be overridden by the `size` parameter in the `preprocess` method.
size_divisor (`int`, *optional*, defaults to 32):
The size by which to make sure both the height and width can be divided. Only has an effect if `do_resize` | 394_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | The size by which to make sure both the height and width can be divided. Only has an effect if `do_resize`
is set to `True`. Can be overridden by the `size_divisor` parameter in the `preprocess` method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Only has an effect if `do_resize` is set to `True`. Can be
overridden by the `resample` parameter in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`): | 394_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | overridden by the `resample` parameter in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Only has an effect if `do_rescale` is set to `True`. Can be
overridden by the `rescale_factor` parameter in the `preprocess` method. | 394_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | overridden by the `rescale_factor` parameter in the `preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of | 394_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the | 394_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image. Can be overridden by the `do_center_crop` parameter in the `preprocess`
method.
crop_size (`Dict[str, int]`, *optional*): | 394_6_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerimageprocessor | .md | method.
crop_size (`Dict[str, int]`, *optional*):
Desired output size when applying center-cropping. Only has an effect if `do_center_crop` is set to `True`.
Can be overridden by the `crop_size` parameter in the `preprocess` method. If unset, defaults to `size`.
do_pad (`bool`, *optional*, defaults to `True`):
Whether to pad the image to the `(max_height, max_width)` of the images in the batch. Can be overridden by
the `do_pad` parameter in the `preprocess` method.
Methods: preprocess | 394_6_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerprocessor | .md | Constructs a BridgeTower processor which wraps a Roberta tokenizer and BridgeTower image processor into a single
processor.
[`BridgeTowerProcessor`] offers all the functionalities of [`BridgeTowerImageProcessor`] and
[`RobertaTokenizerFast`]. See the docstring of [`~BridgeTowerProcessor.__call__`] and
[`~BridgeTowerProcessor.decode`] for more information.
Args:
image_processor (`BridgeTowerImageProcessor`):
An instance of [`BridgeTowerImageProcessor`]. The image processor is a required input. | 394_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerprocessor | .md | An instance of [`BridgeTowerImageProcessor`]. The image processor is a required input.
tokenizer (`RobertaTokenizerFast`):
An instance of [`RobertaTokenizerFast`]. The tokenizer is a required input.
Methods: __call__ | 394_7_1 |
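A short usage sketch combining the tokenizer and image processor through the processor, assuming the same `BridgeTower/bridgetower-base` checkpoint and an illustrative image URL and caption:
```python
import requests
from PIL import Image
from transformers import BridgeTowerProcessor

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

# One call tokenizes the text (RoBERTa tokenizer) and preprocesses the image
# (BridgeTower image processor), returning a single batch of tensors.
encoding = processor(images=image, text="two cats sleeping on a couch", return_tensors="pt")
print(list(encoding.keys()))
```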
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowermodel | .md | The bare BridgeTower Model transformer outputting BridgeTowerModelOutput object without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model. | 394_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowermodel | .md | behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 394_8_1 |
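A minimal forward-pass sketch, assuming the pretrained `BridgeTower/bridgetower-base` weights and that the returned `BridgeTowerModelOutput` exposes `text_features`, `image_features` and `pooler_output`:
```python
import requests
import torch
from PIL import Image
from transformers import BridgeTowerProcessor, BridgeTowerModel

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
model = BridgeTowerModel.from_pretrained("BridgeTower/bridgetower-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="two cats sleeping on a couch", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Separate unimodal features plus the pooled cross-modal representation.
print(outputs.text_features.shape, outputs.image_features.shape, outputs.pooler_output.shape)
```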
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerforcontrastivelearning | .md | BridgeTower Model with a image-text contrastive head on top computing image-text contrastive loss.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model. | 394_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerforcontrastivelearning | .md | behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 394_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerformaskedlm | .md | BridgeTower Model with a language modeling head on top as done during pretraining.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model. | 394_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerformaskedlm | .md | behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 394_10_1 |
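A hedged sketch of masked language modeling with an image as context; it assumes the `BridgeTower/bridgetower-base-itm-mlm` checkpoint (not mentioned above) and an illustrative image URL:
```python
import requests
import torch
from PIL import Image
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")

url = "http://images.cocodataset.org/val2017/000000360943.jpg"  # illustrative image of a cat
image = Image.open(requests.get(url, stream=True).raw)
text = "a <mask> looking out of the window"

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the most likely token at every position, including the masked one.
predicted_ids = outputs.logits.argmax(dim=-1).squeeze(0)
print(processor.decode(predicted_ids, skip_special_tokens=True))
```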
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerforimageandtextretrieval | .md | BridgeTower Model transformer with a classifier head on top (a linear layer on top of the final hidden state of the
[CLS] token) for image-to-text matching.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model. | 394_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#bridgetowerforimageandtextretrieval | .md | behavior.
Parameters:
config ([`BridgeTowerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 394_11_1 |
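A sketch of image-text matching that scores two candidate captions against one image; it assumes the `BridgeTower/bridgetower-base-itm-mlm` checkpoint and that index 1 of the classifier logits corresponds to the "match" class:
```python
import requests
import torch
from PIL import Image
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)
texts = ["two cats sleeping on a couch", "a plane flying over the ocean"]

for text in texts:
    inputs = processor(images=image, text=text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # A higher "match" logit means a better image-text match.
    print(f"{text!r}: {outputs.logits[0, 1].item():.2f}")
```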
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 395_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 395_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bart | .md | <div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=bart">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-bart-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/bart-large-mnli">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div> | 395_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#overview | .md | The Bart model was proposed in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation,
Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019.
According to the abstract,
- Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a
left-to-right decoder (like GPT). | 395_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#overview | .md | left-to-right decoder (like GPT).
- The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme,
where spans of text are replaced with a single mask token.
- BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It
matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new | 395_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#overview | .md | matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new
state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains
of up to 6 ROUGE.
This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The authors' code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/bart). | 395_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#usage-tips | .md | - BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
- Sequence-to-sequence model with an encoder and a decoder. The encoder is fed a corrupted version of the tokens, the decoder is fed the original tokens (but has a mask to hide future words, like a regular transformers decoder). A composition of the following transformations is applied to the encoder's input during pretraining (a toy sketch of the text-infilling transformation follows this list):
* mask random tokens (like in BERT) | 395_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#usage-tips | .md | * mask random tokens (like in BERT)
* delete random tokens
* mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
* permute sentences
* rotate the document to make it start at a specific token | 395_3_1 |
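A toy sketch of the text-infilling transformation listed above (a span of k tokens replaced by a single mask token); this only illustrates the idea and is not the original fairseq implementation:
```python
import random

def infill_span(tokens, mask_token="<mask>", max_span=3):
    """Replace a random span of up to `max_span` tokens with a single mask token.
    A span of length 0 corresponds to inserting a mask token."""
    span_len = random.randint(0, max_span)
    start = random.randint(0, max(0, len(tokens) - span_len))
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

print(infill_span("UN Chief says there is no plan in Syria".split()))
```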
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#implementation-notes | .md | - Bart doesn't use `token_type_ids` for sequence classification. Use [`BartTokenizer`] or
[`~BartTokenizer.encode`] to get the proper splitting.
- The forward pass of [`BartModel`] will create the `decoder_input_ids` if they are not passed.
This is different from some other modeling APIs. A typical use case of this feature is mask filling.
- Model predictions are intended to be identical to the original implementation when
`forced_bos_token_id=0`. This only works, however, if the string you pass to | 395_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#implementation-notes | .md | `forced_bos_token_id=0`. This only works, however, if the string you pass to
[`fairseq.encode`] starts with a space.
- [`~generation.GenerationMixin.generate`] should be used for conditional generation tasks like
summarization; see the example in that method's docstring and the short sketch below.
- Models that load the *facebook/bart-large-cnn* weights will not have a `mask_token_id`, or be able to perform
mask-filling tasks. | 395_4_1 |
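A minimal summarization sketch with [`~generation.GenerationMixin.generate`], assuming the `facebook/bart-large-cnn` checkpoint mentioned above and illustrative generation settings:
```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

article = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires."
)
inputs = tokenizer(article, return_tensors="pt")

# generate() builds the decoder inputs for conditional generation tasks such as summarization.
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```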
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#mask-filling | .md | The `facebook/bart-base` and `facebook/bart-large` checkpoints can be used to fill multi-token masks.
```python
from transformers import BartForConditionalGeneration, BartTokenizer | 395_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#mask-filling | .md | model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
"UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
]
``` | 395_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="summarization"/> | 395_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | <PipelineTag pipeline="summarization"/>
- A blog post on [Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq).
- A notebook on how to [finetune BART for summarization with fastai using blurr](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb). 🌎 | 395_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - A notebook on how to [finetune BART for summarization in two languages with Trainer class](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb). 🌎
- [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb). | 395_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb).
- [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization). | 395_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - An example of how to train [`BartForConditionalGeneration`] with a Hugging Face `datasets` object can be found in this [forum discussion](https://discuss.huggingface.co/t/train-bart-for-conditional-generation-e-g-summarization/1904)
- [Summarization](https://huggingface.co/course/chapter7/5?fw=pt#summarization) chapter of the 🤗 Hugging Face course.
- [Summarization task guide](../tasks/summarization)
<PipelineTag pipeline="fill-mask"/> | 395_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [Summarization task guide](../tasks/summarization)
<PipelineTag pipeline="fill-mask"/>
- [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). | 395_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). | 395_6_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. | 395_6_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Masked language modeling task guide](../tasks/masked_language_modeling)
<PipelineTag pipeline="translation"/>
- A notebook on how to [finetune mBART using Seq2SeqTrainer for Hindi to English translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb). 🌎 | 395_6_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb). | 395_6_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).
- [Translation task guide](../tasks/translation)
See also:
- [Text classification task guide](../tasks/sequence_classification)
- [Question answering task guide](../tasks/question_answering) | 395_6_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#resources | .md | - [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Distilled checkpoints](https://huggingface.co/models?search=distilbart) are described in this [paper](https://arxiv.org/abs/2010.13002). | 395_6_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | This is the configuration class to store the configuration of a [`BartModel`]. It is used to instantiate a BART
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the BART
[facebook/bart-large](https://huggingface.co/facebook/bart-large) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 395_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the BART model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`BartModel`] or [`TFBartModel`].
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer. | 395_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16): | 395_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): | 395_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities. | 395_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
max_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large | 395_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details. | 395_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`): | 395_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | Scale embeddings by dividing by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
num_labels (`int`, *optional*, defaults to 3):
The number of labels to use in [`BartForSequenceClassification`].
forced_eos_token_id (`int`, *optional*, defaults to 2):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`.
Example:
```python | 395_7_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | `eos_token_id`.
Example:
```python
>>> from transformers import BartConfig, BartModel | 395_7_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartconfig | .md | >>> # Initializing a BART facebook/bart-large style configuration
>>> configuration = BartConfig()
>>> # Initializing a model (with random weights) from the facebook/bart-large style configuration
>>> model = BartModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all | 395_7_10 |
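Any of the documented arguments can be overridden when instantiating the configuration; a small illustrative sketch (the values below are arbitrary and do not correspond to a released checkpoint):
```python
from transformers import BartConfig, BartModel

# A smaller-than-default BART; all values here are illustrative.
small_config = BartConfig(
    d_model=512,
    encoder_layers=6,
    decoder_layers=6,
    encoder_attention_heads=8,
    decoder_attention_heads=8,
    encoder_ffn_dim=2048,
    decoder_ffn_dim=2048,
)
model = BartModel(small_config)  # randomly initialized weights
print(sum(p.numel() for p in model.parameters()))
```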
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | Constructs a BART tokenizer, which is similar to the RoBERTa tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import BartTokenizer
>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2] | 395_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | >>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip> | 395_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See | 395_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of | 395_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | <Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`): | 395_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`): | 395_8_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. | 395_8_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict. | 395_8_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizer | .md | modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like
any other word. (The BART tokenizer detects the beginning of words by the preceding space.)
Methods: all | 395_8_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | Construct a "fast" BART tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2 tokenizer,
using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import BartTokenizerFast | 395_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | >>> tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2] | 395_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | >>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip> | 395_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`): | 395_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip> | 395_9_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`. | 395_9_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`): | 395_9_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. | 395_9_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict. | 395_9_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#barttokenizerfast | .md | modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like
any other word. (The BART tokenizer detects the beginning of words by the preceding space.)
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether the post processing step should trim offsets to avoid including whitespaces.
Methods: all
<frameworkcontent>
<pt> | 395_9_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartmodel | .md | The bare BART Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 395_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BartConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 395_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartmodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 395_10_2 |
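A minimal forward-pass sketch using the `facebook/bart-large` checkpoint; note that `decoder_input_ids` are created from `input_ids` when they are not passed, as described in the implementation notes:
```python
import torch
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartModel.from_pretrained("facebook/bart-large")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    # decoder_input_ids are derived from input_ids automatically when omitted.
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```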
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforconditionalgeneration | .md | The BART Model with a language modeling head. Can be used for summarization.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 395_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforconditionalgeneration | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BartConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 395_11_1 |