source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
This is the configuration class to store the configuration of a [`Data2VecVisionModel`]. It is used to instantiate
a Data2VecVision model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Data2VecVision
[facebook/data2vec-vision-base](https://huggingface.co/facebook/data2vec-vision-base) architecture.
Args:
hidden_size (`int`, *optional*, defaults to 768):
|
181_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
181_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
181_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
|
181_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether to use a mask token for masked image modeling.
|
181_9_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether to use a mask token for masked image modeling.
use_absolute_position_embeddings (`bool`, *optional*, defaults to `False`):
Whether to use BERT-style absolute position embeddings.
use_relative_position_bias (`bool`, *optional*, defaults to `False`):
Whether to use T5-style relative position embeddings in the self-attention layers.
use_shared_relative_position_bias (`bool`, *optional*, defaults to `False`):
|
181_9_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
use_shared_relative_position_bias (`bool`, *optional*, defaults to `False`):
Whether to use the same relative position embeddings across all self-attention layers of the Transformer.
layer_scale_init_value (`float`, *optional*, defaults to 0.1):
Scale to use in the self-attention layers. 0.1 for base, 1e-5 for large. Set 0 to disable layer scale.
drop_path_rate (`float`, *optional*, defaults to 0.1):
Stochastic depth rate per sample (when applied in the main path of residual layers).
|
181_9_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
Stochastic depth rate per sample (when applied in the main path of residual layers).
use_mean_pooling (`bool`, *optional*, defaults to `True`):
Whether to mean pool the final hidden states of the patches instead of using the final hidden state of the
CLS token, before applying the classification head.
out_indices (`List[int]`, *optional*, defaults to `[3, 5, 7, 11]`):
Indices of the feature maps to use for semantic segmentation.
pool_scales (`Tuple[int]`, *optional*, defaults to `[1, 2, 3, 6]`):
|
181_9_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
pool_scales (`Tuple[int]`, *optional*, defaults to `[1, 2, 3, 6]`):
Pooling scales used in the Pooling Pyramid Module applied to the last feature map.
use_auxiliary_head (`bool`, *optional*, defaults to `True`):
Whether to use an auxiliary head during training.
auxiliary_loss_weight (`float`, *optional*, defaults to 0.4):
Weight of the cross-entropy loss of the auxiliary head.
auxiliary_channels (`int`, *optional*, defaults to 256):
Number of channels to use in the auxiliary head.
|
181_9_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
auxiliary_channels (`int`, *optional*, defaults to 256):
Number of channels to use in the auxiliary head.
auxiliary_num_convs (`int`, *optional*, defaults to 1):
Number of convolutional layers to use in the auxiliary head.
auxiliary_concat_input (`bool`, *optional*, defaults to `False`):
Whether to concatenate the output of the auxiliary head with the input before the classification layer.
semantic_loss_ignore_index (`int`, *optional*, defaults to 255):
|
181_9_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
semantic_loss_ignore_index (`int`, *optional*, defaults to 255):
The index that is ignored by the loss function of the semantic segmentation model.
Example:
```python
>>> from transformers import Data2VecVisionConfig, Data2VecVisionModel
```
|
181_9_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionconfig
|
.md
|
```python
>>> # Initializing a Data2VecVision data2vec_vision-base-patch16-224-in22k style configuration
>>> configuration = Data2VecVisionConfig()
>>> # Initializing a model (with random weights) from the data2vec_vision-base-patch16-224-in22k style configuration
>>> model = Data2VecVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
<frameworkcontent>
<pt>
|
181_9_11
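The segmentation-specific arguments above (`out_indices`, `pool_scales`, and the auxiliary-head settings) only take effect in [`Data2VecVisionForSemanticSegmentation`]. A hedged sketch of overriding them; the values shown repeat the documented defaults purely for illustration:
```python
>>> from transformers import Data2VecVisionConfig, Data2VecVisionForSemanticSegmentation

>>> # Segmentation head arguments; everything else keeps its default
>>> configuration = Data2VecVisionConfig(
...     out_indices=[3, 5, 7, 11],  # encoder layers that feed the segmentation head
...     pool_scales=[1, 2, 3, 6],  # Pooling Pyramid Module scales on the last feature map
...     use_auxiliary_head=True,
...     auxiliary_loss_weight=0.4,
... )
>>> model = Data2VecVisionForSemanticSegmentation(configuration)  # randomly initialized
```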
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudiomodel
|
.md
|
The bare Data2VecAudio Model transformer outputting raw hidden-states without any specific head on top.
Data2VecAudio was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudiomodel
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
181_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudiomodel
|
.md
|
behavior.
Parameters:
config ([`Data2VecAudioConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_10_2
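For orientation, a minimal sketch of pulling frame-level hidden states out of the bare audio model. The `facebook/data2vec-audio-base-960h` checkpoint name comes from the Hub, and the silent one-second waveform merely stands in for real 16 kHz audio:
```python
>>> import numpy as np
>>> import torch
>>> from transformers import AutoFeatureExtractor, Data2VecAudioModel

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")
>>> model = Data2VecAudioModel.from_pretrained("facebook/data2vec-audio-base-960h")

>>> speech = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz
>>> inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch, num_frames, hidden_size)
```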
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforaudioframeclassification
|
.md
|
Data2VecAudio Model with a frame classification head on top for tasks like Speaker Diarization.
Data2VecAudio was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforaudioframeclassification
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
181_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforaudioframeclassification
|
.md
|
behavior.
Parameters:
config ([`Data2VecAudioConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_11_2
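A hedged usage sketch: the self-supervised `facebook/data2vec-audio-base` checkpoint is loaded with a freshly initialized frame-classification head, so the per-frame predictions are meaningless until the model is fine-tuned for diarization:
```python
>>> import numpy as np
>>> import torch
>>> from transformers import AutoFeatureExtractor, Data2VecAudioForAudioFrameClassification

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base")
>>> model = Data2VecAudioForAudioFrameClassification.from_pretrained(
...     "facebook/data2vec-audio-base", num_labels=2  # e.g. two speakers
... )

>>> speech = np.zeros(16000, dtype=np.float32)  # stand-in for a real recording
>>> inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # one logit vector per audio frame
>>> speaker_per_frame = logits.argmax(dim=-1)
```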
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforctc
|
.md
|
Data2VecAudio Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).
Data2VecAudio was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforctc
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
181_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforctc
|
.md
|
behavior.
Parameters:
config ([`Data2VecAudioConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_12_2
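Transcription with the CTC head follows the usual Wav2Vec2-style recipe: greedy argmax over the logits, then `batch_decode` collapses repeats and blanks. A minimal sketch, assuming the fine-tuned `facebook/data2vec-audio-base-960h` checkpoint; the silent waveform stands in for real 16 kHz speech:
```python
>>> import numpy as np
>>> import torch
>>> from transformers import AutoProcessor, Data2VecAudioForCTC

>>> processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
>>> model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")

>>> speech = np.zeros(16000, dtype=np.float32)  # replace with a real waveform
>>> inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
```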
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforsequenceclassification
|
.md
|
Data2VecAudio Model with a sequence classification head on top (a linear layer over the pooled output) for tasks
like SUPERB Keyword Spotting.
Data2VecAudio was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforsequenceclassification
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
181_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`Data2VecAudioConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_13_2
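A hedged sketch of clip-level classification. The base checkpoint carries no classification head, so the head below is randomly initialized and `num_labels=12` (the SUPERB Keyword Spotting label count) is only illustrative:
```python
>>> import numpy as np
>>> import torch
>>> from transformers import AutoFeatureExtractor, Data2VecAudioForSequenceClassification

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base")
>>> model = Data2VecAudioForSequenceClassification.from_pretrained(
...     "facebook/data2vec-audio-base", num_labels=12
... )

>>> speech = np.zeros(16000, dtype=np.float32)
>>> inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # one logit vector for the whole clip
>>> predicted_class = logits.argmax(dim=-1)
```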
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforxvector
|
.md
|
Data2VecAudio Model with an XVector feature extraction head on top for tasks like Speaker Verification.
Data2VecAudio was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforxvector
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
181_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioforxvector
|
.md
|
behavior.
Parameters:
config ([`Data2VecAudioConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_14_2
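Speaker verification with the XVector head boils down to comparing utterance embeddings, typically with cosine similarity. A sketch under the assumption that the self-supervised base checkpoint is used, whose XVector head is untrained:
```python
>>> import numpy as np
>>> import torch
>>> from transformers import AutoFeatureExtractor, Data2VecAudioForXVector

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base")
>>> model = Data2VecAudioForXVector.from_pretrained("facebook/data2vec-audio-base")

>>> # two dummy utterances; real use compares two recordings of possibly the same speaker
>>> utterances = [np.zeros(16000, dtype=np.float32), np.zeros(16000, dtype=np.float32)]
>>> inputs = feature_extractor(utterances, sampling_rate=16000, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     embeddings = model(**inputs).embeddings
>>> similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
```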
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextmodel
|
.md
|
The bare Data2VecText Model transformer outputting raw hidden-states without any specific head on top.
Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextmodel
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
181_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextmodel
|
.md
|
and behavior.
Parameters:
config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
|
181_15_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextmodel
|
.md
|
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention Is All
You Need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set
|
181_15_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextmodel
|
.md
|
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
`add_cross_attention` set to `True`; `encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward
|
181_15_4
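Concretely, the decoder behaviour described above is switched on through the configuration. A minimal sketch; the weights come from the masked-LM checkpoint `facebook/data2vec-text-base`, so the added cross-attention layers are newly initialized:
```python
>>> from transformers import AutoConfig, Data2VecTextModel

>>> config = AutoConfig.from_pretrained("facebook/data2vec-text-base")
>>> config.is_decoder = True  # causal masking in self-attention
>>> config.add_cross_attention = True  # inserts cross-attention layers for Seq2Seq use
>>> decoder = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base", config=config)
>>> # the forward pass now also accepts encoder_hidden_states from a separate encoder
```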
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforcausallm
|
.md
|
Data2VecText Model with a `language modeling` head on top for CLM fine-tuning.
Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforcausallm
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
181_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforcausallm
|
.md
|
and behavior.
Parameters:
config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_16_2
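As a generation sketch: `facebook/data2vec-text-base` was pre-trained with masked language modeling, so `is_decoder=True` has to be set explicitly and fluent continuations should not be expected without CLM fine-tuning:
```python
>>> from transformers import AutoTokenizer, Data2VecTextForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForCausalLM.from_pretrained("facebook/data2vec-text-base", is_decoder=True)

>>> inputs = tokenizer("Hello, my dog is", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=10)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```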
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextformaskedlm
|
.md
|
Data2VecText Model with a `language modeling` head on top.
Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextformaskedlm
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
181_17_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextformaskedlm
|
.md
|
and behavior.
Parameters:
config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_17_2
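A minimal fill-mask sketch with the `facebook/data2vec-text-base` checkpoint, whose tokenizer uses the RoBERTa-style `<mask>` token:
```python
>>> import torch
>>> from transformers import AutoTokenizer, Data2VecTextForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForMaskedLM.from_pretrained("facebook/data2vec-text-base")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # take the highest-scoring token at the mask position
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> tokenizer.decode(logits[0, mask_index].argmax(dim=-1))
```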
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforsequenceclassification
|
.md
|
Data2VecText Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforsequenceclassification
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
181_18_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_18_2
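A hedged sequence-classification sketch; the head on top of the base checkpoint is randomly initialized, and `num_labels=2` is only illustrative of a binary GLUE-style task:
```python
>>> import torch
>>> from transformers import AutoTokenizer, Data2VecTextForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForSequenceClassification.from_pretrained(
...     "facebook/data2vec-text-base", num_labels=2
... )

>>> inputs = tokenizer("A perfectly serviceable example sentence.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = logits.argmax(dim=-1)
```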
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextformultiplechoice
|
.md
|
Data2VecText Model with a multiple choice classification head on top (a linear layer on top of the pooled output
and a softmax) e.g. for RocStories/SWAG tasks.
Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextformultiplechoice
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
181_19_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextformultiplechoice
|
.md
|
and behavior.
Parameters:
config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_19_2
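The multiple-choice head expects inputs of shape `(batch_size, num_choices, sequence_length)`. A sketch with the base checkpoint, whose choice-scoring head is randomly initialized:
```python
>>> import torch
>>> from transformers import AutoTokenizer, Data2VecTextForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForMultipleChoice.from_pretrained("facebook/data2vec-text-base")

>>> prompt = "The glass fell off the table,"
>>> choices = ["so it shattered on the floor.", "so it orbited the moon."]
>>> # encode each (prompt, choice) pair, then add the batch dimension
>>> encoding = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     logits = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}).logits
>>> best_choice = logits.argmax(dim=-1)
```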
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextfortokenclassification
|
.md
|
Data2VecText Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
181_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextfortokenclassification
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
181_20_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextfortokenclassification
|
.md
|
and behavior.
Parameters:
config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_20_2
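A hedged token-classification sketch; `num_labels=9` mirrors a CoNLL-2003-style BIO tag set, and the head is randomly initialized on top of the base checkpoint:
```python
>>> import torch
>>> from transformers import AutoTokenizer, Data2VecTextForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForTokenClassification.from_pretrained(
...     "facebook/data2vec-text-base", num_labels=9
... )

>>> inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # one logit vector per token
>>> predictions = logits.argmax(dim=-1)
```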
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforquestionanswering
|
.md
|
Data2VecText Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
|
181_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforquestionanswering
|
.md
|
Michael Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
181_21_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_21_2
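Extractive QA reduces to picking the best start and end positions. A sketch with the base checkpoint, whose span head is randomly initialized, so the extracted span is illustrative only:
```python
>>> import torch
>>> from transformers import AutoTokenizer, Data2VecTextForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForQuestionAnswering.from_pretrained("facebook/data2vec-text-base")

>>> question = "Who proposed data2vec?"
>>> context = "data2vec was proposed by researchers at Meta AI."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # the tokens between the best start and end logits form the predicted answer
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> tokenizer.decode(inputs.input_ids[0, start : end + 1])
```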
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionmodel
|
.md
|
The bare Data2VecVision Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`Data2VecVisionConfig`]): Model configuration class with all the parameters of the model.
|
181_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionmodel
|
.md
|
behavior.
Parameters:
config ([`Data2VecVisionConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_22_1
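For orientation, a minimal sketch of extracting hidden states from the bare vision model; the blank PIL image stands in for a real photo:
```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Data2VecVisionModel

>>> processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")
>>> model = Data2VecVisionModel.from_pretrained("facebook/data2vec-vision-base")

>>> image = Image.new("RGB", (224, 224))  # stand-in for a real image
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch, 1 + num_patches, hidden_size)
```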
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionforimageclassification
|
.md
|
Data2VecVision Model transformer with an image classification head on top (a linear layer on top of the average of
the final hidden states of the patch tokens) e.g. for ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
181_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionforimageclassification
|
.md
|
behavior.
Parameters:
config ([`Data2VecVisionConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
181_23_1
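A minimal classification sketch, assuming the ImageNet-1k fine-tuned checkpoint `facebook/data2vec-vision-base-ft1k` from the Hub; the blank image again stands in for a real photo:
```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Data2VecVisionForImageClassification

>>> processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base-ft1k")
>>> model = Data2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base-ft1k")

>>> image = Image.new("RGB", (224, 224))
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> print(model.config.id2label[logits.argmax(-1).item()])
```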
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionforsemanticsegmentation
|
.md
|
Data2VecVision Model transformer with a semantic segmentation head on top e.g. for ADE20k, CityScapes.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`Data2VecVisionConfig`]): Model configuration class with all the parameters of the model.
|
181_24_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecvisionforsemanticsegmentation
|
.md
|
behavior.
Parameters:
config ([`Data2VecVisionConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf>
|
181_24_1
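A hedged segmentation sketch; the self-supervised base checkpoint has no segmentation head, so the head below is randomly initialized and `num_labels=150` (the ADE20k label count) is illustrative:
```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Data2VecVisionForSemanticSegmentation

>>> processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")
>>> model = Data2VecVisionForSemanticSegmentation.from_pretrained(
...     "facebook/data2vec-vision-base", num_labels=150
... )

>>> image = Image.new("RGB", (512, 512))
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)
>>> segmentation_map = logits.argmax(dim=1)
```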
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#tfdata2vecvisionmodel
|
.md
|
No docstring available for TFData2VecVisionModel
Methods: call
|
181_25_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#tfdata2vecvisionforimageclassification
|
.md
|
No docstring available for TFData2VecVisionForImageClassification
Methods: call
|
181_26_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#tfdata2vecvisionforsemanticsegmentation
|
.md
|
No docstring available for TFData2VecVisionForSemanticSegmentation
Methods: call
</tf>
</frameworkcontent>
|
181_27_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
182_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
182_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#overview
|
.md
|
The UL2 model was presented in [Unifying Language Learning Paradigms](https://arxiv.org/pdf/2205.05131v1.pdf) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
The abstract from the paper is the following:
|
182_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#overview
|
.md
|
*Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for
|
182_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#overview
|
.md
|
pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is
|
182_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#overview
|
.md
|
diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks
|
182_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#overview
|
.md
|
Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL
|
182_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#overview
|
.md
|
strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.*
|
182_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#overview
|
.md
|
This model was contributed by [DanielHesslow](https://huggingface.co/Seledorn). The original code can be found [here](https://github.com/google-research/google-research/tree/master/ul2).
|
182_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#usage-tips
|
.md
|
- UL2 is an encoder-decoder model pre-trained on a mixture of denoising objectives and fine-tuned on an array of downstream tasks.
- UL2 has the same architecture as [T5v1.1](t5v1.1) but uses the Gated-SiLU activation function instead of Gated-GELU.
- The authors released checkpoints for a single architecture, which can be found [here](https://huggingface.co/google/ul2).
<Tip>
|
182_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ul2.md
|
https://huggingface.co/docs/transformers/en/model_doc/ul2/#usage-tips
|
.md
|
- The authors released checkpoints for a single architecture, which can be found [here](https://huggingface.co/google/ul2).
<Tip>
As UL2 has the same architecture as T5v1.1, refer to [T5's documentation page](t5) for API reference, tips, code examples and notebooks.
</Tip>
|
182_2_1
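Because UL2 reuses the T5 architecture, the checkpoint loads directly through the T5 classes. A minimal sketch, assuming the [google/ul2](https://huggingface.co/google/ul2) checkpoint (about 20B parameters, so substantial memory is needed); `[NLG]` is one of the mode-switching prefixes from the paper:
```python
>>> from transformers import AutoTokenizer, T5ForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google/ul2")
>>> model = T5ForConditionalGeneration.from_pretrained("google/ul2")

>>> inputs = tokenizer("[NLG] The weather today is", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```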
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
183_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
183_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#overview
|
.md
|
The OLMo model was proposed in [OLMo: Accelerating the Science of Language Models](https://arxiv.org/abs/2402.00838) by Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E.
|
183_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#overview
|
.md
|
Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi.
|
183_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#overview
|
.md
|
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo models are trained on the Dolma dataset. We release all code, checkpoints, logs (coming soon), and details involved in training these models.
The abstract from the paper is the following:
|
183_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#overview
|
.md
|
*Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important details of their training data, architectures, and development undisclosed. Given the importance of these details in scientifically studying these models, including their biases and potential risks, we believe it is essential for the research community to have
|
183_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#overview
|
.md
|
these models, including their biases and potential risks, we believe it is essential for the research community to have access to powerful, truly open LMs. To this end, this technical report details the first release of OLMo, a state-of-the-art, truly Open Language Model and its framework to build and study the science of language modeling. Unlike most prior efforts that have only released model weights and inference code, we release OLMo and the whole framework, including training data and training and
|
183_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#overview
|
.md
|
released model weights and inference code, we release OLMo and the whole framework, including training data and training and evaluation code. We hope this release will empower and strengthen the open research community and inspire a new wave of innovation.*
|
183_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#overview
|
.md
|
This model was contributed by [shanearora](https://huggingface.co/shanearora).
The original code can be found [here](https://github.com/allenai/OLMo/tree/main/olmo).
|
183_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
This is the configuration class to store the configuration of an [`OlmoModel`]. It is used to instantiate an OLMo
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [allenai/OLMo-7B-hf](https://huggingface.co/allenai/OLMo-7B-hf).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
183_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50304):
Vocabulary size of the OLMo model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`OlmoModel`].
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
|
183_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
|
183_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention (GQA). If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
|
183_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by mean-pooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
|
183_2_4
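To make the head-count arithmetic above concrete, a sketch of the three attention variants expressed through `num_key_value_heads`; the numbers are illustrative, not OLMo's released settings:
```python
>>> from transformers import OlmoConfig

>>> mha = OlmoConfig(num_attention_heads=32, num_key_value_heads=32)  # Multi Head Attention
>>> gqa = OlmoConfig(num_attention_heads=32, num_key_value_heads=8)   # Grouped Query Attention
>>> mqa = OlmoConfig(num_attention_heads=32, num_key_value_heads=1)   # Multi Query Attention
```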
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (`bool`, *optional*, defaults to `True`):
|
183_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/value states (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
bos_token_id (`int`, *optional*):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 50279):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie the input and output word embeddings.
|
183_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie the input and output word embeddings.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
|
183_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
|
183_2_8
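For instance, a `rope_scaling` dictionary in the expected format looks like this (hedged: the `linear` strategy and the factor are illustrative, and the feature is experimental as noted above):
```python
>>> from transformers import OlmoConfig

>>> # roughly doubles the usable context via linear RoPE scaling
>>> configuration = OlmoConfig(rope_scaling={"type": "linear", "factor": 2.0})
```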
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
clip_qkv (`float`, *optional*):
|
183_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
The dropout ratio for the attention probabilities.
clip_qkv (`float`, *optional*):
If not `None`, elements of query, key and value attention states are clipped so that their
absolute value does not exceed this value.
```python
>>> from transformers import OlmoModel, OlmoConfig
```
|
183_2_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoconfig
|
.md
|
```python
>>> # Initializing an OLMo 7B style configuration
>>> configuration = OlmoConfig()
>>> # Initializing a model from the OLMo 7B style configuration
>>> model = OlmoModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
183_2_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmomodel
|
.md
|
The bare Olmo Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
183_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmomodel
|
.md
|
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`OlmoConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
183_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmomodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is an [`OlmoDecoderLayer`].
Args:
config: OlmoConfig
Methods: forward
|
183_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmo.md
|
https://huggingface.co/docs/transformers/en/model_doc/olmo/#olmoforcausallm
|
.md
|
No docstring available for OlmoForCausalLM
Methods: forward
|
183_4_0
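Since no docstring is available, a minimal generation sketch; `allenai/OLMo-7B-hf` is the converted checkpoint referenced earlier on this page, and a 7B model requires substantial memory:
```python
>>> from transformers import AutoTokenizer, OlmoForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
>>> model = OlmoForCausalLM.from_pretrained("allenai/OLMo-7B-hf")

>>> inputs = tokenizer("Language modeling is ", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```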
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
184_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
184_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#xlm-roberta
|
.md
|
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=xlm-roberta">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlm--roberta-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/xlm-roberta-base">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
184_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#overview
|
.md
|
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume
Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's
RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl
data.
The abstract from the paper is the following:
|
184_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta.md
|
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta/#overview
|
.md
|
data.
The abstract from the paper is the following:
*This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a
wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred
languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly
outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on
|
184_2_1
|