(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
If the attention window contains a token with global attention, the attention weight at the corresponding
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
attention, the attention weights to all other tokens in `attentions` are set to 0, and the values should be
accessed from `global_attentions`.
global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
where `x` is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
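To make the indexing scheme above concrete, here is a minimal sketch with hypothetical variable names; it assumes a Longformer forward pass run with `output_attentions=True` and a `global_attention_mask`, and that `config.attention_window` is a per-layer list:

```python
>>> attn = outputs.attentions[0]  # (batch_size, num_heads, sequence_length, x + attention_window + 1)
>>> x = int(global_attention_mask.sum())  # number of tokens with global attention
>>> window = model.config.attention_window[0]  # window size of the first layer (assuming a per-layer list)
>>> # a token's attention weight to itself sits at index `x + attention_window / 2`
>>> self_weights = attn[0, :, :, x + window // 2]  # (num_heads, sequence_length)
>>> # its attention to the `x` globally attending tokens occupies the first `x` slots
>>> to_global = attn[0, :, :, :x]
```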
models.longformer.modeling_longformer.LongformerSequenceClassifierOutput
Base class for outputs of sentence classification models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Classification (or regression if config.num_labels==1) loss.
logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`):
Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
attention_window + 1)`, where `x` is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first `x` values) and to every token in the attention window (remaining `attention_window
+ 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the
remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
If the attention window contains a token with global attention, the attention weight at the corresponding
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
attention, the attention weights to all other tokens in `attentions` are set to 0, and the values should be
accessed from `global_attentions`.
global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
where `x` is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput
Base class for outputs of multiple choice Longformer models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Classification loss.
logits (`torch.FloatTensor` of shape `(batch_size, num_choices)`):
*num_choices* is the second dimension of the input tensors (see *input_ids* above).
Classification scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
attention_window + 1)`, where `x` is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first `x` values) and to every token in the attention window (remaining `attention_window
+ 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the
remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
If the attention window contains a token with global attention, the attention weight at the corresponding
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
attention, the attention weights to all other tokens in `attentions` are set to 0, and the values should be
accessed from `global_attentions`.
global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
where `x` is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
models.longformer.modeling_longformer.LongformerTokenClassifierOutput
Base class for outputs of token classification models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Classification loss.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`):
Classification scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
attention_window + 1)`, where `x` is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first `x` values) and to every token in the attention window (remaining `attention_window
+ 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the
remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
If the attention window contains a token with global attention, the attention weight at the corresponding
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
attention, the attention weights to all other tokens in `attentions` are set to 0, and the values should be
accessed from `global_attentions`.
global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
where `x` is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput

[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutputWithPooling

[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput

[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput

[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput

[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput

[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput
<frameworkcontent>
<pt> | 420_10_61 |
## LongformerModel

The bare Longformer Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LongformerConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
This class copied code from [`RobertaModel`] and overwrote standard self-attention with longformer self-attention
to provide the ability to process long sequences following the self-attention approach described in [Longformer:
the Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, and Arman Cohan.
Longformer self-attention combines a local (sliding window) and global attention to extend to long documents
without the O(n^2) increase in memory and compute.
The self-attention module `LongformerSelfAttention` implemented here supports the combination of local and global
attention but it lacks support for autoregressive attention and dilated attention. Autoregressive and dilated
attention are more relevant for autoregressive language modeling than finetuning on downstream tasks. A future
release will add support for autoregressive attention, but support for dilated attention requires a custom CUDA
kernel to be memory and compute efficient.
Methods: forward | 420_11_4 |
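A short usage sketch may help here; it is illustrative rather than normative (the checkpoint name is one public example, and putting global attention on the `<s>` token is a common but task-dependent choice):

```python
>>> import torch
>>> from transformers import AutoTokenizer, LongformerModel

>>> tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
>>> model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> # 0 -> local (sliding window) attention, 1 -> global attention
>>> global_attention_mask = torch.zeros_like(inputs.input_ids)
>>> global_attention_mask[:, 0] = 1  # give the <s> token global attention

>>> outputs = model(**inputs, global_attention_mask=global_attention_mask)
>>> last_hidden_state = outputs.last_hidden_state
```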
## LongformerForMaskedLM

Longformer Model with a `language modeling` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LongformerConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 420_12_2 |
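As an illustrative sketch (the checkpoint is one public example; Longformer uses a RoBERTa-style tokenizer, so the mask token is `<mask>`):

```python
>>> import torch
>>> from transformers import AutoTokenizer, LongformerForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
>>> model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # pick the highest-scoring token at the masked position
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_id)
```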
## LongformerForSequenceClassification

Longformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LongformerConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 420_13_2 |
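An illustrative sketch (note that loading a base checkpoint into this class initializes a fresh classification head, which would need fine-tuning before the predictions are meaningful):

```python
>>> import torch
>>> from transformers import AutoTokenizer, LongformerForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
>>> # the classification head is newly initialized here and would need fine-tuning
>>> model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096", num_labels=2)

>>> inputs = tokenizer("A very long document goes here...", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = int(logits.argmax(dim=-1))
```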
## LongformerForMultipleChoice

Longformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LongformerConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 420_14_2 |
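An illustrative sketch of the expected input layout (each choice is paired with the prompt, and inputs are reshaped to `(batch_size, num_choices, sequence_length)`):

```python
>>> import torch
>>> from transformers import AutoTokenizer, LongformerForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
>>> model = LongformerForMultipleChoice.from_pretrained("allenai/longformer-base-4096")

>>> prompt = "In Italy, pizza served in formal settings is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> # encode every (prompt, choice) pair, then add a batch dimension of 1
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()})
>>> predicted_choice = int(outputs.logits.argmax(dim=-1))
```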
## LongformerForTokenClassification

Longformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LongformerConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 420_15_2 |
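An illustrative sketch (as with sequence classification, the token-classification head is newly initialized from a base checkpoint and needs fine-tuning, e.g. on an NER dataset):

```python
>>> import torch
>>> from transformers import AutoTokenizer, LongformerForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
>>> model = LongformerForTokenClassification.from_pretrained("allenai/longformer-base-4096")

>>> inputs = tokenizer("HuggingFace is a company based in New York City.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(dim=-1)  # one label id per token
```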
## LongformerForQuestionAnswering

Longformer Model with a span classification head on top for extractive question-answering tasks like SQuAD /
TriviaQA (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LongformerConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
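An illustrative sketch with a publicly available fine-tuned checkpoint (the model sets global attention on the question tokens automatically when no `global_attention_mask` is passed):

```python
>>> import torch
>>> from transformers import AutoTokenizer, LongformerForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
>>> model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet."
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> start_index = int(outputs.start_logits.argmax())
>>> end_index = int(outputs.end_logits.argmax())
>>> answer = tokenizer.decode(inputs.input_ids[0, start_index : end_index + 1])
```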
</pt>
<tf> | 420_16_2 |
## TFLongformerModel

[[autodoc]] TFLongformerModel
    - call
## TFLongformerForMaskedLM

[[autodoc]] TFLongformerForMaskedLM
    - call
## TFLongformerForQuestionAnswering

[[autodoc]] TFLongformerForQuestionAnswering
    - call
## TFLongformerForSequenceClassification

[[autodoc]] TFLongformerForSequenceClassification
    - call
## TFLongformerForTokenClassification

[[autodoc]] TFLongformerForTokenClassification
    - call
## TFLongformerForMultipleChoice

[[autodoc]] TFLongformerForMultipleChoice
    - call
</tf>
</frameworkcontent> | 420_22_0 |
<!--Copyright 2022 NVIDIA and The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 421_0_1 |
# GroupViT

## Overview

The GroupViT model was proposed in [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
Inspired by [CLIP](clip), GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories.
The abstract from the paper is the following: | 421_1_0 |
*Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.*
This model was contributed by [xvjiarui](https://huggingface.co/xvjiarui). The TensorFlow version was contributed by [ariG23498](https://huggingface.co/ariG23498) with the help of [Yih-Dar SHIEH](https://huggingface.co/ydshieh), [Amy Roberts](https://huggingface.co/amyeroberts), and [Joao Gante](https://huggingface.co/joaogante).
The original code can be found [here](https://github.com/NVlabs/GroupViT). | 421_1_4 |
## Usage tips

- You may specify `output_segmentation=True` in the forward of `GroupViTModel` to get the segmentation logits of input texts, as sketched below.
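A minimal sketch of that flag in use (the image URL and text prompts are illustrative; `segmentation_logits` holds one map per input text):

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, GroupViTModel

>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs, output_segmentation=True)
>>> seg_logits = outputs.segmentation_logits  # (batch_size, num_texts, height, width)
```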
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT.
- The quickest way to get started with GroupViT is by checking the [example notebooks](https://github.com/xvjiarui/GroupViT/blob/main/demo/GroupViT_hf_inference_notebook.ipynb) (which showcase zero-shot segmentation inference).
- One can also check out the [HuggingFace Spaces demo](https://huggingface.co/spaces/xvjiarui/GroupViT) to play with GroupViT. | 421_3_0 |
## GroupViTConfig

[`GroupViTConfig`] is the configuration class to store the configuration of a [`GroupViTModel`]. It is used to
instantiate a GroupViT model according to the specified arguments, defining the text model and vision model
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT
[nvidia/groupvit-gcc-yfcc](https://huggingface.co/nvidia/groupvit-gcc-yfcc) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`GroupViTTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`GroupViTVisionConfig`].
projection_dim (`int`, *optional*, defaults to 256):
Dimensionality of text and vision projection layers.
projection_intermediate_dim (`int`, *optional*, defaults to 4096):
Dimensionality of intermediate layer of text and vision projection layers.
logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
The initial value of the *logit_scale* parameter. Default is used as per the original GroupViT
implementation.
kwargs (*optional*):
Dictionary of keyword arguments.
Methods: from_text_vision_configs | 421_4_3 |
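For symmetry with the text and vision configuration examples below, a short sketch of both construction paths:

```python
>>> from transformers import GroupViTConfig, GroupViTTextConfig, GroupViTVisionConfig, GroupViTModel

>>> # Initializing a GroupViTConfig with nvidia/groupvit-gcc-yfcc style configuration
>>> configuration = GroupViTConfig()
>>> model = GroupViTModel(configuration)

>>> # Alternatively, compose it from separate text and vision configurations
>>> configuration = GroupViTConfig.from_text_vision_configs(GroupViTTextConfig(), GroupViTVisionConfig())
```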
## GroupViTTextConfig

This is the configuration class to store the configuration of a [`GroupViTTextModel`]. It is used to instantiate a
GroupViT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the GroupViT
[nvidia/groupvit-gcc-yfcc](https://huggingface.co/nvidia/groupvit-gcc-yfcc) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 49408):
Vocabulary size of the GroupViT text model. Defines the number of different tokens that can be represented
by the `inputs_ids` passed when calling [`GroupViTModel`].
hidden_size (`int`, *optional*, defaults to 256):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 1024):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 4):
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (`int`, *optional*, defaults to 77):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
Example:
```python
>>> from transformers import GroupViTTextConfig, GroupViTTextModel

>>> # Initializing a GroupViTTextModel with nvidia/groupvit-gcc-yfcc style configuration
>>> configuration = GroupViTTextConfig()
>>> model = GroupViTTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 421_5_7 |
## GroupViTVisionConfig

This is the configuration class to store the configuration of a [`GroupViTVisionModel`]. It is used to instantiate
a GroupViT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GroupViT
[nvidia/groupvit-gcc-yfcc](https://huggingface.co/nvidia/groupvit-gcc-yfcc) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 384):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 1536):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
depths (`List[int]`, *optional*, defaults to [6, 3, 3]):
The number of layers in each encoder block.
num_group_tokens (`List[int]`, *optional*, defaults to [64, 8, 0]):
The number of group tokens for each stage.
num_output_groups (`List[int]`, *optional*, defaults to [64, 8, 8]):
The number of output groups for each stage; 0 means no group.
num_attention_heads (`int`, *optional*, defaults to 6):
Number of attention heads for each attention layer in the Transformer encoder.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
Example:
```python
>>> from transformers import GroupViTVisionConfig, GroupViTVisionModel

>>> # Initializing a GroupViTVisionModel with nvidia/groupvit-gcc-yfcc style configuration
>>> configuration = GroupViTVisionConfig()
>>> model = GroupViTVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
<frameworkcontent>
<pt> | 421_6_7 |
## GroupViTModel

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`GroupViTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_image_features | 421_7_1 |
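An illustrative zero-shot sketch in the CLIP style (the image URL and text prompts are arbitrary examples):

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, GroupViTModel

>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities
```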
## GroupViTTextModel

[[autodoc]] GroupViTTextModel
    - forward
## GroupViTVisionModel

[[autodoc]] GroupViTVisionModel
    - forward
</pt>
<tf> | 421_9_0 |
## TFGroupViTModel

[[autodoc]] TFGroupViTModel
    - call
    - get_text_features
    - get_image_features
## TFGroupViTTextModel

[[autodoc]] TFGroupViTTextModel
    - call
## TFGroupViTVisionModel

[[autodoc]] TFGroupViTVisionModel
    - call
</tf>
</frameworkcontent> | 421_12_0 |
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 422_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pix2struct.md | https://huggingface.co/docs/transformers/en/model_doc/pix2struct/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 422_0_1 |
# Pix2Struct

## Overview

The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
The abstract from the paper is the following: | 422_1_0 |