source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. | 383_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | 383_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2): | 383_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`IBertModel`]
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`): | 383_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). | 383_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
quant_mode (`bool`, *optional*, defaults to `False`):
Whether to quantize the model or not.
force_dequant (`str`, *optional*, defaults to `"none"`): | 383_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | Whether to quantize the model or not.
force_dequant (`str`, *optional*, defaults to `"none"`):
Force dequantize specific nonlinear layers. Dequantized layers are then executed with full precision.
`"none"`, `"gelu"`, `"softmax"`, `"layernorm"` and `"nonlinear"` are supported. By default, it is set to
`"none"`, which does not dequantize any layers. Please specify `"gelu"`, `"softmax"`, or `"layernorm"` to
dequantize GELU, Softmax, or LayerNorm, respectively. `"nonlinear"` will dequantize all nonlinear layers, | 383_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig | .md | dequantize GELU, Softmax, or LayerNorm, respectively. `"nonlinear"` will dequantize all nonlinear layers,
i.e., GELU, Softmax, and LayerNorm. | 383_3_9 |
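As a quick illustration of how the quantization options above fit together, here is a minimal sketch that builds an `IBertConfig` with `quant_mode` enabled and instantiates a randomly initialized [`IBertModel`] from it. It follows the usual configuration pattern used elsewhere in these docs and is added here for clarity; it is not part of the original I-BERT docstring.

```python
>>> from transformers import IBertConfig, IBertModel

>>> # Initializing an I-BERT configuration with quantization enabled
>>> configuration = IBertConfig(quant_mode=True, force_dequant="none")

>>> # Initializing a model (with random weights) from the configuration
>>> model = IBertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```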
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertmodel | .md | The bare I-BERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 383_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`IBertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the | 383_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertmodel | .md | model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention is | 383_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertmodel | .md | cross-attention is added between the self-attention layers, following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
Methods: forward | 383_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertformaskedlm | .md | I-BERT Model with a `language modeling` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 383_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertformaskedlm | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`IBertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the | 383_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertformaskedlm | .md | model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 383_5_2 |
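Below is a short masked-language-modeling sketch for this head. It assumes the `kssteven/ibert-roberta-base` checkpoint and the RoBERTa-style `<mask>` token; treat it as an illustrative example rather than text from the official I-BERT documentation.

```python
>>> import torch
>>> from transformers import AutoTokenizer, IBertForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> model = IBertForMaskedLM.from_pretrained("kssteven/ibert-roberta-base")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Index of the masked token and its highest-scoring prediction
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> predicted_token = tokenizer.decode(predicted_id)
```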
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertforsequenceclassification | .md | I-BERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 383_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertforsequenceclassification | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`IBertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the | 383_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertforsequenceclassification | .md | model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 383_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertformultiplechoice | .md | I-BERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 383_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertformultiplechoice | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`IBertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the | 383_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertformultiplechoice | .md | model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 383_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertfortokenclassification | .md | I-BERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 383_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertfortokenclassification | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`IBertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the | 383_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertfortokenclassification | .md | model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 383_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertforquestionanswering | .md | I-BERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 383_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertforquestionanswering | .md | library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`IBertConfig`]): Model configuration class with all the parameters of the | 383_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md | https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertforquestionanswering | .md | and behavior.
Parameters:
config ([`IBertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 383_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 384_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 384_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#overview | .md | The Decision Transformer model was proposed in [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345)
by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
The abstract from the paper is the following:
*We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. | 384_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#overview | .md | *We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem.
This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances
in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that
casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or | 384_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#overview | .md | casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked
Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our
Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, | 384_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#overview | .md | Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity,
Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on
Atari, OpenAI Gym, and Key-to-Door tasks.*
This version of the model is for tasks where the state is a vector.
This model was contributed by [edbeeching](https://huggingface.co/edbeeching). The original code can be found [here](https://github.com/kzl/decision-transformer). | 384_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | This is the configuration class to store the configuration of a [`DecisionTransformerModel`]. It is used to
instantiate a Decision Transformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the standard
DecisionTransformer architecture. Many of the config options are used to instantiate the GPT2 model that is used as
part of the architecture. | 384_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | part of the architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
state_dim (`int`, *optional*, defaults to 17):
The state size for the RL environment
act_dim (`int`, *optional*, defaults to 4):
The size of the output action space
hidden_size (`int`, *optional*, defaults to 128):
The size of the hidden layers
max_ep_len (`int`, *optional*, defaults to 4096): | 384_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | The size of the hidden layers
max_ep_len (`int`, *optional*, defaults to 4096):
The maximum length of an episode in the environment
action_tanh (`bool`, *optional*, defaults to `True`):
Whether to use a tanh activation on action prediction
vocab_size (`int`, *optional*, defaults to 50257):
Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`DecisionTransformerModel`].
n_positions (`int`, *optional*, defaults to 1024): | 384_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | `inputs_ids` passed when calling [`DecisionTransformerModel`].
n_positions (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_layer (`int`, *optional*, defaults to 3):
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 1):
Number of attention heads for each attention layer in the Transformer encoder.
n_inner (`int`, *optional*): | 384_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | Number of attention heads for each attention layer in the Transformer encoder.
n_inner (`int`, *optional*):
Dimensionality of the inner feed-forward layers. If unset, will default to 4 times `n_embd`.
activation_function (`str`, *optional*, defaults to `"gelu"`):
Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
resid_pdrop (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | 384_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (`int`, *optional*, defaults to 0.1):
The dropout ratio for the embeddings.
attn_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
The epsilon to use in the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02): | 384_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | The epsilon to use in the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_attn_weights (`bool`, *optional*, defaults to `True`):
Scale attention weights by dividing by sqrt(hidden_size).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). | 384_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | Whether or not the model should return the last key/values attentions (not used by all models).
scale_attn_by_inverse_layer_idx (`bool`, *optional*, defaults to `False`):
Whether to additionally scale attention weights by `1 / (layer_idx + 1)`.
reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`):
Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention
dot-product/softmax to float() when training with mixed precision.
Example:
```python | 384_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | dot-product/softmax to float() when training with mixed precision.
Example:
```python
>>> from transformers import DecisionTransformerConfig, DecisionTransformerModel | 384_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformerconfig | .md | >>> # Initializing a DecisionTransformer configuration
>>> configuration = DecisionTransformerConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = DecisionTransformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 384_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformergpt2model | .md | No docstring available for DecisionTransformerGPT2Model
Methods: forward | 384_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformermodel | .md | The Decision Transformer Model
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`~DecisionTransformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 384_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/decision_transformer.md | https://huggingface.co/docs/transformers/en/model_doc/decision_transformer/#decisiontransformermodel | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model builds upon the GPT2 architecture to perform autoregressive prediction of actions in an offline RL
setting. Refer to the paper for more details: https://arxiv.org/abs/2106.01345
Methods: forward | 384_4_1 |
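For clarity, here is a hedged sketch of a forward pass with a randomly initialized model. The tensor shapes follow the `state_dim` and `act_dim` fields described in the configuration above; this is an illustrative example rather than text from the original documentation.

```python
>>> import torch
>>> from transformers import DecisionTransformerConfig, DecisionTransformerModel

>>> config = DecisionTransformerConfig(state_dim=17, act_dim=4)
>>> model = DecisionTransformerModel(config)

>>> batch_size, seq_len = 1, 20
>>> states = torch.randn(batch_size, seq_len, config.state_dim)
>>> actions = torch.randn(batch_size, seq_len, config.act_dim)
>>> rewards = torch.randn(batch_size, seq_len, 1)
>>> returns_to_go = torch.randn(batch_size, seq_len, 1)
>>> timesteps = torch.arange(seq_len).unsqueeze(0)
>>> attention_mask = torch.ones(batch_size, seq_len, dtype=torch.long)

>>> with torch.no_grad():
...     outputs = model(
...         states=states,
...         actions=actions,
...         rewards=rewards,
...         returns_to_go=returns_to_go,
...         timesteps=timesteps,
...         attention_mask=attention_mask,
...     )

>>> # Predicted next actions, one per timestep: (batch_size, seq_len, act_dim)
>>> action_preds = outputs.action_preds
```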
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 385_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 385_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasus | .md | <div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=pegasus">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-pegasus-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/pegasus_paraphrase">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div> | 385_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#overview | .md | The Pegasus model was proposed in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.
According to the abstract,
- Pegasus' pretraining task is intentionally similar to summarization: important sentences are removed/masked from an
input document and are generated together as one output sequence from the remaining sentences, similar to an
extractive summary. | 385_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#overview | .md | extractive summary.
- Pegasus achieves SOTA summarization performance on all 12 downstream tasks, as measured by ROUGE and human eval.
This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The Authors' code can be found [here](https://github.com/google-research/pegasus). | 385_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#usage-tips | .md | - Sequence-to-sequence model with the same encoder-decoder model architecture as BART. Pegasus is pre-trained jointly on two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization specific pretraining objective, called Gap Sentence Generation (GSG).
* MLM: encoder input tokens are randomly replaced by a mask token and have to be predicted by the encoder (like in BERT) | 385_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#usage-tips | .md | * MLM: encoder input tokens are randomly replaced by a mask token and have to be predicted by the encoder (like in BERT)
* GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, which has a causal mask to hide future words, like a regular auto-regressive transformer decoder.
- FP16 is not supported (help/ideas on this appreciated!).
- The adafactor optimizer is recommended for pegasus fine-tuning. | 385_3_1 |
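Since the tip above only names the optimizer, here is a minimal sketch of wiring up Adafactor for Pegasus fine-tuning. The checkpoint and the hyperparameters (fixed external learning rate, no relative step sizes) are common choices rather than values from the original documentation; tune them for your task.

```python
>>> from transformers import Adafactor, PegasusForConditionalGeneration

>>> model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")

>>> # One commonly used Adafactor setup: fixed external learning rate,
>>> # no relative step sizes, no warmup-based initialization
>>> optimizer = Adafactor(
...     model.parameters(),
...     lr=1e-3,
...     scale_parameter=False,
...     relative_step=False,
...     warmup_init=False,
... )
```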
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#checkpoints | .md | All the [checkpoints](https://huggingface.co/models?search=pegasus) are fine-tuned for summarization, besides
*pegasus-large*, from which the other checkpoints are fine-tuned:
- Each checkpoint is 2.2 GB on disk and 568M parameters.
- FP16 is not supported (help/ideas on this appreciated!).
- Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU. | 385_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#checkpoints | .md | - Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU.
- Full replication results and correctly pre-processed data can be found in this [Issue](https://github.com/huggingface/transformers/issues/6844#issue-689259666).
- [Distilled checkpoints](https://huggingface.co/models?search=distill-pegasus) are described in this [paper](https://arxiv.org/abs/2010.13002). | 385_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#implementation-notes | .md | - All models are transformer encoder-decoders with 16 layers in each component.
- The implementation is completely inherited from [`BartForConditionalGeneration`]
- Some key configuration differences:
- static, sinusoidal position embeddings
- the model starts generating with pad_token_id (which has 0 token_embedding) as the prefix.
- more beams are used (`num_beams=8`)
- All pretrained pegasus checkpoints are the same besides three attributes: `tokenizer.model_max_length` (maximum | 385_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#implementation-notes | .md | - All pretrained pegasus checkpoints are the same besides three attributes: `tokenizer.model_max_length` (maximum
input size), `max_length` (the maximum number of tokens to generate) and `length_penalty`.
- The code to convert checkpoints trained in the author's [repo](https://github.com/google-research/pegasus) can be
found in `convert_pegasus_tf_to_pytorch.py`. | 385_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#usage-example | .md | ```python
>>> from transformers import PegasusForConditionalGeneration, PegasusTokenizer
>>> import torch
>>> src_text = [
... """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
... ] | 385_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#usage-example | .md | >>> model_name = "google/pegasus-xsum"
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> tokenizer = PegasusTokenizer.from_pretrained(model_name)
>>> model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
>>> batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(device)
>>> translated = model.generate(**batch)
>>> tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
>>> assert (
... tgt_text[0] | 385_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#usage-example | .md | >>> tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
>>> assert (
... tgt_text[0]
... == "California's largest electricity provider has turned off power to hundreds of thousands of customers."
... )
``` | 385_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#resources | .md | - [Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/seq2seq-distillation/finetune_pegasus_xsum.sh) to fine-tune pegasus
on the XSUM dataset. Data download instructions at [examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md).
- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization) | 385_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | This is the configuration class to store the configuration of a [`PegasusModel`]. It is used to instantiate a
PEGASUS model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the PEGASUS
[google/pegasus-large](https://huggingface.co/google/pegasus-large) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 385_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the PEGASUS model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`PegasusModel`] or [`TFPegasusModel`].
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer. | 385_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16): | 385_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): | 385_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities. | 385_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02): | 385_8_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0): | 385_8_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models)
forced_eos_token_id (`int`, *optional*, defaults to 1): | 385_8_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | forced_eos_token_id (`int`, *optional*, defaults to 1):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`.
Example:
```python
>>> from transformers import PegasusConfig, PegasusModel | 385_8_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusconfig | .md | >>> # Initializing a PEGASUS google/pegasus-large style configuration
>>> configuration = PegasusConfig()
>>> # Initializing a model (with random weights) from the google/pegasus-large style configuration
>>> model = PegasusModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 385_8_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | warning: `add_tokens` does not work at the moment.
Construct a PEGASUS tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that | 385_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip> | 385_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
mask_token (`str`, *optional*, defaults to `"<mask_2>"`): | 385_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | token instead.
mask_token (`str`, *optional*, defaults to `"<mask_2>"`):
The token used for masking single token values. This is the token used when training this model with masked
language modeling (MLM). This is the token that the PEGASUS encoder will try to predict during pretraining.
It corresponds to *[MASK2]* in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive
Summarization](https://arxiv.org/pdf/1912.08777.pdf).
mask_token_sent (`str`, *optional*, defaults to `"<mask_1>"`): | 385_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | Summarization](https://arxiv.org/pdf/1912.08777.pdf).
mask_token_sent (`str`, *optional*, defaults to `"<mask_1>"`):
The token used for masking whole target sentences. This is the token used when training this model with gap
sentences generation (GSG). This is the sentence that the PEGASUS decoder will try to predict during
pretraining. It corresponds to *[MASK1]* in [PEGASUS: Pre-training with Extracted Gap-sentences for
Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf). | 385_9_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).
additional_special_tokens (`List[str]`, *optional*):
Additional special tokens used by the tokenizer. If no additional_special_tokens are provided <mask_2> and
<unk_2, ..., unk_102> are used as additional special tokens corresponding to the [original PEGASUS
tokenizer](https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/ops/pretrain_parsing_ops.cc#L66) | 385_9_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | that uses the tokens 2 - 104 only for pretraining
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed. | 385_9_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizer | .md | - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout. | 385_9_7 |
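To make the special-token layout above concrete, here is a short, hedged sketch of loading the tokenizer and encoding text that contains the sentence-level mask. The checkpoint name is only an example; any Pegasus checkpoint with a SentencePiece vocab file should behave the same way.

```python
>>> from transformers import PegasusTokenizer

>>> tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")

>>> # <mask_1> (mask_token_sent) masks a whole sentence for GSG;
>>> # <mask_2> (mask_token) masks individual tokens for MLM.
>>> src = "PG&E scheduled the blackouts in response to forecasts for high winds. <mask_1>"
>>> batch = tokenizer(src, truncation=True, padding="longest", return_tensors="pt")
>>> decoded = tokenizer.batch_decode(batch.input_ids, skip_special_tokens=False)
```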
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizerfast | .md | Construct a "fast" PEGASUS tokenizer (backed by HuggingFace's *tokenizers* library). Based on
[Unigram](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`): | 385_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizerfast | .md | refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip> | 385_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizerfast | .md | eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
mask_token (`str`, *optional*, defaults to `"<mask_2>"`): | 385_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizerfast | .md | token instead.
mask_token (`str`, *optional*, defaults to `"<mask_2>"`):
The token used for masking single token values. This is the token used when training this model with masked
language modeling (MLM). This is the token that the PEGASUS encoder will try to predict during pretraining.
It corresponds to *[MASK2]* in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive
Summarization](https://arxiv.org/pdf/1912.08777.pdf).
mask_token_sent (`str`, *optional*, defaults to `"<mask_1>"`): | 385_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizerfast | .md | Summarization](https://arxiv.org/pdf/1912.08777.pdf).
mask_token_sent (`str`, *optional*, defaults to `"<mask_1>"`):
The token used for masking whole target sentences. This is the token used when training this model with gap
sentences generation (GSG). This is the sentence that the PEGASUS decoder will try to predict during
pretraining. It corresponds to *[MASK1]* in [PEGASUS: Pre-training with Extracted Gap-sentences for
Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf). | 385_10_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizerfast | .md | Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).
additional_special_tokens (`List[str]`, *optional*):
Additional special tokens used by the tokenizer. If no additional_special_tokens are provided <mask_2> and
<unk_2, ..., unk_102> are used as additional special tokens corresponding to the [original PEGASUS
tokenizer](https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/ops/pretrain_parsing_ops.cc#L66) | 385_10_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasustokenizerfast | .md | that uses the tokens 2 - 104 only for pretraining
<frameworkcontent>
<pt> | 385_10_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusmodel | .md | The bare PEGASUS Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 385_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`PegasusConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 385_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusmodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 385_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusforconditionalgeneration | .md | The PEGASUS Model with a language modeling head. Can be used for summarization.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 385_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusforconditionalgeneration | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`PegasusConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 385_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusforconditionalgeneration | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 385_12_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#pegasusforcausallm | .md | No docstring available for PegasusForCausalLM
Methods: forward
</pt>
<tf> | 385_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#tfpegasusmodel | .md | No docstring available for TFPegasusModel
Methods: call | 385_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#tfpegasusforconditionalgeneration | .md | No docstring available for TFPegasusForConditionalGeneration
Methods: call
</tf>
<jax> | 385_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#flaxpegasusmodel | .md | No docstring available for FlaxPegasusModel
Methods: __call__
- encode
- decode | 385_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus.md | https://huggingface.co/docs/transformers/en/model_doc/pegasus/#flaxpegasusforconditionalgeneration | .md | No docstring available for FlaxPegasusForConditionalGeneration
Methods: __call__
- encode
- decode
</jax>
</frameworkcontent> | 385_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/poolformer.md | https://huggingface.co/docs/transformers/en/model_doc/poolformer/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 386_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/poolformer.md | https://huggingface.co/docs/transformers/en/model_doc/poolformer/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 386_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/poolformer.md | https://huggingface.co/docs/transformers/en/model_doc/poolformer/#overview | .md | The PoolFormer model was proposed in [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Sea AI Labs. Instead of designing complicated token mixers to achieve SOTA performance, the goal of this work is to demonstrate that the competence of transformer models largely stems from the general MetaFormer architecture.
The abstract from the paper is the following: | 386_1_0 |