source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridimageprocessor
|
.md
|
method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize:
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
|
179_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridimageprocessor
|
.md
|
method.
do_normalize:
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
|
179_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridimageprocessor
|
.md
|
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
Methods: preprocess
|
179_6_5
|
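The preprocessing flags above can be set once when the image processor is constructed or overridden per call. A minimal sketch, assuming the `google/vit-hybrid-base-bit-384` checkpoint mentioned in the ViT Hybrid docs; the override values are purely illustrative:

```python
# Minimal sketch: overriding ViTHybridImageProcessor defaults in a preprocess call.
# Checkpoint name and override values are illustrative assumptions.
import numpy as np
from PIL import Image
from transformers import ViTHybridImageProcessor

processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
image = Image.fromarray(np.random.randint(0, 256, (384, 384, 3), dtype=np.uint8))

inputs = processor(
    image,
    do_rescale=True,               # rescale pixel values by rescale_factor
    rescale_factor=1 / 255,
    do_normalize=True,             # normalize with image_mean / image_std
    image_mean=[0.5, 0.5, 0.5],    # per-channel mean (IMAGENET_STANDARD_MEAN)
    image_std=[0.5, 0.5, 0.5],     # per-channel std (IMAGENET_STANDARD_STD)
    do_convert_rgb=True,
    return_tensors="pt",
)
print(inputs["pixel_values"].shape)
```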
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridmodel
|
.md
|
The bare ViT Hybrid Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ViTHybridConfig`]): Model configuration class with all the parameters of the model.
|
179_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridmodel
|
.md
|
behavior.
Parameters:
config ([`ViTHybridConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
179_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridforimageclassification
|
.md
|
ViT Hybrid Model transformer with an image classification head on top (a linear layer on top of the final hidden
state of the [CLS] token) e.g. for ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ViTHybridConfig`]): Model configuration class with all the parameters of the model.
|
179_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_hybrid.md
|
https://huggingface.co/docs/transformers/en/model_doc/vit_hybrid/#vithybridforimageclassification
|
.md
|
behavior.
Parameters:
config ([`ViTHybridConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
179_8_1
|
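As a hedged sketch of how the classification head described above is used (checkpoint name taken from the ViT Hybrid docs, not an official recipe):

```python
# Minimal sketch: image classification with ViTHybridForImageClassification.
# The checkpoint name is an assumption based on the ViT Hybrid documentation.
import numpy as np
import torch
from PIL import Image
from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification

processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384")

image = Image.fromarray(np.random.randint(0, 256, (384, 384, 3), dtype=np.uint8))
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # linear head on the final [CLS] hidden state
print(model.config.id2label[int(logits.argmax(-1))])
```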
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
180_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
180_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zamba
|
.md
|
Zamba is a large language model (LLM) trained by Zyphra, and made available under an Apache 2.0 license. Please see the [Zyphra Hugging Face](https://huggingface.co/collections/zyphra/) repository for model weights.
This model was contributed by [pglo](https://huggingface.co/pglo).
|
180_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#model-details
|
.md
|
Zamba-7B-v1 is a hybrid between state-space models (specifically [Mamba](https://github.com/state-spaces/mamba)) and transformers, and was trained using next-token prediction. Zamba uses a shared transformer layer after every 6 Mamba blocks. It uses the [Mistral v0.1 tokenizer](https://huggingface.co/mistralai/Mistral-7B-v0.1). We came to this architecture after a series of ablations at small scales. Zamba-7B-v1 was pre-trained on 1T tokens of text and code data.
|
180_2_0
|
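Since Zamba reuses the Mistral v0.1 tokenizer, the standard `AutoTokenizer` workflow applies; a small illustrative sketch:

```python
# Minimal sketch: Zamba ships the Mistral v0.1 tokenizer, so AutoTokenizer works as usual.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
ids = tokenizer("Zamba interleaves Mamba blocks with a shared attention layer.")["input_ids"]
print(len(ids), tokenizer.decode(ids))
```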
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#model-details
|
.md
|
<img src="https://github.com/user-attachments/assets/c2cff209-b901-483c-87aa-774b82a0769f" width="30%" height="40%" />
|
180_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#presequities
|
.md
|
Zamba requires you to use `transformers` version 4.46.0 or higher:
```bash
pip install "transformers>=4.46.0"
```
In order to run optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`:
```bash
pip install mamba-ssm "causal-conv1d>=1.2.0"
```
You also have to have the model on a CUDA device.
|
180_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#presequities
|
.md
|
```bash
pip install mamba-ssm "causal-conv1d>=1.2.0"
```
You also have to have the model on a CUDA device.
You can also run the model without the optimized Mamba kernels, but this is **not** recommended as it results in significantly higher latency. To do so, specify `use_mamba_kernels=False` when loading the model.
|
180_3_1
|
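A hedged sketch of loading the model without the optimized kernels (for example on a machine without `mamba-ssm`/`causal-conv1d` or without a CUDA device); expect noticeably slower generation:

```python
# Minimal sketch: disabling the fused Mamba kernels when loading Zamba.
# Without mamba-ssm / causal-conv1d (or without CUDA) this falls back to the
# slower pure-PyTorch sequential implementation.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba-7B-v1",
    use_mamba_kernels=False,     # config override forwarded by from_pretrained
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```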
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#inference
|
.md
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "A funny prompt would be "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
|
180_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#model-card
|
.md
|
The model cards can be found at:
* [Zamba-7B](MODEL_CARD_ZAMBA-7B-v1.md)
|
180_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#issues
|
.md
|
For issues with model output, or community discussion, please use the Hugging Face community [forum](https://huggingface.co/zyphra/zamba-7b).
|
180_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#license
|
.md
|
The model weights are open-sourced via an Apache 2.0 license.
|
180_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
This is the configuration class to store the configuration of a [`ZambaModel`]. It is used to instantiate a
Zamba model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Zamba-v0.1 model.
[Zyphra/Zamba-7B-v1](https://huggingface.co/Zyphra/Zamba-7B-v1)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
180_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the Zamba model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`ZambaModel`]
tie_word_embeddings (`bool`, *optional*, defaults to `True`):
|
180_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
`inputs_ids` passed when calling [`ZambaModel`]
tie_word_embeddings (`bool`, *optional*, defaults to `True`):
Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the
model has an output word embedding layer.
hidden_size (`int`, *optional*, defaults to 3712):
Dimension of the hidden representations.
attention_hidden_size (`int`, *optional*):
Dimension of the hidden representations of the inputs to the Attention layer.
|
180_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
attention_hidden_size (`int`, *optional*):
Dimension of the hidden representations of the inputs to the Attention layer.
intermediate_size (`int`, *optional*, defaults to 14848):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 76):
Number of hidden layers in the model.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
attention_head_dim (`int`, *optional*):
|
180_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer decoder.
attention_head_dim (`int`, *optional*):
Dimension of the attention head in the Transformer decoder.
num_key_value_heads (`int`, *optional*, defaults to 16):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=None`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
|
180_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf).
n_mamba_heads (`int`, *optional*, defaults to 2):
Number of mamba heads for each mamba layer.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
|
180_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
Number of mamba heads for each mamba layer.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the decoder.
hidden_mamba_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the mamba layer.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
180_8_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
num_logits_to_keep (`int` or `None`, *optional*, defaults to 1):
|
180_8_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
relevant if `config.is_decoder=True`.
num_logits_to_keep (`int` or `None`, *optional*, defaults to 1):
Number of prompt logits to calculate during generation. If `None`, all logits will be calculated. If an
integer value, only last `num_logits_to_keep` logits will be calculated. Default is 1 because only the
logits of the last prompt token are needed for generation. For long sequences, the logits for the entire
sequence may use a lot of memory, so setting `num_logits_to_keep=1` will reduce the memory footprint
|
180_8_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
sequence may use a lot of memory, so setting `num_logits_to_keep=1` will reduce the memory footprint
significantly.
pad_token_id (`int`, *optional*, defaults to 0):
The id of the padding token.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the "end-of-sequence" token.
max_position_embeddings (`int`, *optional*, defaults to 4096):
|
180_8_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
The id of the "end-of-sequence" token.
max_position_embeddings (`int`, *optional*, defaults to 4096):
This value has no real effect; it indicates the maximum sequence length the model is intended to be used
with. The model can be used with longer sequences, but performance may degrade.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
attn_layer_period (`int`, *optional*, defaults to 6):
Once in this many layers, we will have a shared attention layer
|
180_8_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
attn_layer_period (`int`, *optional*, defaults to 6):
Once in this many layers, we will have a shared attention layer
attn_layer_offset (`int`, *optional*, defaults to 4):
Offset of the shared attention layer
use_mamba_kernels (`bool`, *optional*, defaults to `True`):
Flag indicating whether or not to use the fast mamba kernels. These are available only if `mamba-ssm` and
`causal-conv1d` are installed, and the mamba modules are running on a CUDA device. Raises ValueError if
|
180_8_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
`causal-conv1d` are installed, and the mamba modules are running on a CUDA device. Raises ValueError if
`True` and kernels are not available
mamba_d_state (`int`, *optional*, defaults to 16):
The dimension of the mamba state space latents.
mamba_d_conv (`int`, *optional*, defaults to 4):
The size of the mamba convolution kernel
mamba_expand (`int`, *optional*, defaults to 2):
Expanding factor (relative to hidden_size) used to determine the mamba intermediate size
|
180_8_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
Expanding factor (relative to hidden_size) used to determine the mamba intermediate size
mamba_dt_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
Rank of the mamba discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`
time_step_min (`float`, *optional*, defaults to 0.001):
Minimum `time_step` used to bound `dt_proj_bias`.
time_step_max (`float`, *optional*, defaults to 0.1):
Maximum `time_step` used to bound `dt_proj_bias`.
|
180_8_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
time_step_max (`float`, *optional*, defaults to 0.1):
Maximum `time_step` used to bound `dt_proj_bias`.
time_step_floor (`float`, *optional*, defaults to 0.0001):
Minimum clamping value of the `dt_proj.bias` layer initialization.
mamba_conv_bias (`bool`, *optional*, defaults to `True`):
Flag indicating whether or not to use bias in the convolution layer of the mamba mixer block.
mamba_proj_bias (`bool`, *optional*, defaults to `False`):
|
180_8_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaconfig
|
.md
|
mamba_proj_bias (`bool`, *optional*, defaults to `False`):
Flag indicating whether or not to use bias in the input and output projections (["in_proj", "out_proj"]) of the mamba mixer block
|
180_8_15
|
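To make the argument list above concrete, here is a hedged sketch that instantiates a deliberately small configuration with random weights; the reduced sizes are illustrative assumptions, not the Zamba-7B defaults:

```python
# Minimal sketch: a small ZambaConfig and randomly initialized ZambaModel.
# The reduced sizes are illustrative assumptions, not the Zamba-7B defaults.
from transformers import ZambaConfig, ZambaModel

configuration = ZambaConfig(
    vocab_size=32000,
    hidden_size=512,
    intermediate_size=2048,
    num_hidden_layers=12,
    num_attention_heads=8,
    num_key_value_heads=8,     # MHA for the toy model (GQA would use fewer KV heads)
    attn_layer_period=6,       # a shared attention block every 6 layers
    attn_layer_offset=4,
    use_mamba_kernels=False,   # keep the toy model runnable without mamba-ssm / CUDA
)
model = ZambaModel(configuration)
print(sum(p.numel() for p in model.parameters()))
```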
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambamodel
|
.md
|
The bare Zamba Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
180_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ZambaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
180_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambamodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`ZambaDecoderLayer`]
Args:
config: ZambaConfig
Methods: forward
|
180_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaforcausallm
|
.md
|
No docstring available for ZambaForCausalLM
Methods: forward
|
180_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaforsequenceclassification
|
.md
|
The Zamba Model with a sequence classification head on top (linear layer).
[`ZambaForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
180_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
180_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
180_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/zamba/#zambaforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`ZambaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
180_11_3
|
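A hedged sketch of the last-token behaviour described above. The base checkpoint is reused only for illustration (the classification head is randomly initialized and would need fine-tuning), and a pad token is defined so the model can locate the last non-padding token in each row:

```python
# Minimal sketch: ZambaForSequenceClassification classifies from the last
# non-padding token, so a pad_token_id must be known when batching.
# The checkpoint and num_labels are illustrative; the head is untrained here.
import torch
from transformers import AutoTokenizer, ZambaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = ZambaForSequenceClassification.from_pretrained(
    "Zyphra/Zamba-7B-v1", num_labels=2, use_mamba_kernels=False
)
model.config.pad_token_id = tokenizer.pad_token_id

batch = tokenizer(
    ["a short example", "a slightly longer example sentence"],
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**batch).logits   # one (num_labels,) score vector per sequence
print(logits.shape)
```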
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
181_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
181_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#overview
|
.md
|
The Data2Vec model was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images.
|
181_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#overview
|
.md
|
Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images.
Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
The abstract from the paper is the following:
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and
|
181_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#overview
|
.md
|
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and
objectives differ widely because they were developed with a single modality in mind. To get us closer to general
self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech,
NLP or computer vision. The core idea is to predict latent representations of the full input data based on a
|
181_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#overview
|
.md
|
NLP or computer vision. The core idea is to predict latent representations of the full input data based on a
masked view of the input in a self-distillation setup using a standard Transformer architecture.
Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which
are local in nature, data2vec predicts contextualized latent representations that contain information from
|
181_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#overview
|
.md
|
are local in nature, data2vec predicts contextualized latent representations that contain information from
the entire input. Experiments on the major benchmarks of speech recognition, image classification, and
natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.*
|
181_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#overview
|
.md
|
Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.*
This model was contributed by [edugp](https://huggingface.co/edugp) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
[sayakpaul](https://github.com/sayakpaul) and [Rocketknight1](https://github.com/Rocketknight1) contributed Data2Vec for vision in TensorFlow.
The original code (for NLP and Speech) can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec).
|
181_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#overview
|
.md
|
The original code (for NLP and Speech) can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec).
The original code for vision can be found [here](https://github.com/facebookresearch/data2vec_vision/tree/main/beit).
|
181_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#usage-tips
|
.md
|
- Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method.
- For Data2VecAudio, preprocessing is identical to [`Wav2Vec2Model`], including feature extraction
- For Data2VecText, preprocessing is identical to [`RobertaModel`], including tokenization.
- For Data2VecVision, preprocessing is identical to [`BeitModel`], including feature extraction.
|
181_2_0
|
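Because each modality reuses its parent architecture's preprocessing, the usual Auto classes resolve to the familiar processor classes; a small sketch using checkpoints from the hub:

```python
# Minimal sketch: Data2Vec preprocessing reuses the Wav2Vec2 / RoBERTa / BEiT classes.
from transformers import AutoTokenizer, AutoFeatureExtractor, AutoImageProcessor

text_tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")                # RoBERTa-style tokenizer
audio_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")  # Wav2Vec2 feature extractor
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")        # BEiT image processor

print(type(text_tokenizer).__name__, type(audio_extractor).__name__, type(image_processor).__name__)
```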
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
181_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
The SDPA implementation is currently available for the Data2VecAudio and Data2VecVision models.
```python
import torch
from transformers import Data2VecVisionForImageClassification
|
181_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#using-scaled-dot-product-attention-sdpa
|
.md
|
```python
import torch
from transformers import Data2VecVisionForImageClassification

model = Data2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
For the Data2VecVision model, on a local benchmark (NVIDIA GeForce RTX 2060-8GB, PyTorch 2.5.1, OS Ubuntu 20.04)
|
181_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#using-scaled-dot-product-attention-sdpa
|
.md
|
For the Data2VecVision model, on a local benchmark (NVIDIA GeForce RTX 2060-8GB, PyTorch 2.5.1, OS Ubuntu 20.04)
with `float16` and `facebook/data2vec-vision-base` model, we saw the following improvements during training and
inference:
|
181_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#training
|
.md
|
| num_training_steps | batch_size | image_size | is_cuda | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | SDPA peak mem (MB) | Mem saving (%) |
|--------------------|------------|--------------|---------|----------------------------|---------------------------|-------------|----------------------|--------------------|----------------|
|
181_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#training
|
.md
|
| 50 | 2 | (1048, 640) | True | 0.996 | 0.754 | 32.147 | 6722.198 | 4264.653 | 57.626 |
|
181_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#inference
|
.md
|
| Image batch size | Eager (s/iter) | Eager CI, % | Eager memory (MB) | SDPA (s/iter) | SDPA CI, % | SDPA memory (MB) | SDPA speedup | SDPA memory saved |
|-------------------:|-----------------:|:--------------|--------------------:|----------------:|:-------------|-------------------:|---------------:|--------------------:|
|
181_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#inference
|
.md
|
| 1 | 0.011 | ±0.3% | 3.76143e+08 | 0.01 | ±0.3% | 3.74397e+08 | 1.101 | 0.466 |
| 4 | 0.014 | ±0.1% | 4.02756e+08 | 0.012 | ±0.2% | 3.91373e+08 | 1.219 | 2.909 |
|
181_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#inference
|
.md
|
| 16 | 0.046 | ±0.3% | 4.96482e+08 | 0.035 | ±0.2% | 4.51017e+08 | 1.314 | 10.081 |
| 32 | 0.088 | ±0.1% | 6.23903e+08 | 0.067 | ±0.1% | 5.32974e+08 | 1.33 | 17.061 |
|
181_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec.
<PipelineTag pipeline="image-classification"/>
- [`Data2VecVisionForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
181_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#resources
|
.md
|
- To fine-tune [`TFData2VecVisionForImageClassification`] on a custom dataset, see [this notebook](https://colab.research.google.com/github/sayakpaul/TF-2.0-Hacks/blob/master/data2vec_vision_image_classification.ipynb).
**Data2VecText documentation resources**
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
|
181_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#resources
|
.md
|
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
**Data2VecAudio documentation resources**
- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)
**Data2VecVision documentation resources**
|
181_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#resources
|
.md
|
- [Automatic speech recognition task guide](../tasks/asr)
**Data2VecVision documentation resources**
- [Image classification](../tasks/image_classification)
- [Semantic segmentation](../tasks/semantic_segmentation)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
181_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
This is the configuration class to store the configuration of a [`Data2VecTextModel`]. It
is used to instantiate a Data2VecText model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Data2VecText
[facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) architecture.
|
181_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
[facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the Data2VecText model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`Data2VecTextModel`].
|
181_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
the `inputs_ids` passed when calling [`Data2VecTextModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
|
181_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
|
181_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
181_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`Data2VecTextModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
|
181_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
|
181_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
is_decoder (`bool`, *optional*, defaults to `False`):
Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
use_cache (`bool`, *optional*, defaults to `True`):
|
181_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
Examples:
```python
>>> from transformers import Data2VecTextConfig, Data2VecTextModel
|
181_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vectextconfig
|
.md
|
>>> # Initializing a Data2VecText facebook/data2vec-text-base style configuration
>>> configuration = Data2VecTextConfig()
>>> # Initializing a model (with random weights) from the facebook/data2vec-text-base style configuration
>>> model = Data2VecTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
181_7_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
This is the configuration class to store the configuration of a [`Data2VecAudioModel`]. It is used to instantiate
a Data2VecAudio model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Data2VecAudio
[facebook/data2vec-audio-base-960h](https://huggingface.co/facebook/data2vec-audio-base-960h) architecture.
|
181_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
[facebook/data2vec-audio-base-960h](https://huggingface.co/facebook/data2vec-audio-base-960h) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32):
Vocabulary size of the Data2VecAudio model. Defines the number of different tokens that can be represented
|
181_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
Vocabulary size of the Data2VecAudio model. Defines the number of different tokens that can be represented
by the `inputs_ids` passed when calling [`Data2VecAudioModel`] or [`TFData2VecAudioModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
|
181_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
181_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
181_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
final_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the final projection layer of [`Data2VecAudioForCTC`].
|
181_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
The dropout probability for the final projection layer of [`Data2VecAudioForCTC`].
layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
|
181_8_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for output of the feature encoder.
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
|
181_8_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
|
181_8_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 3, 3)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
|
181_8_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of *conv_kernel* defines the number of convolutional layers and has to match the length of
*conv_dim*.
conv_bias (`bool`, *optional*, defaults to `False`):
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
|
181_8_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
Number of groups of 1D convolutional positional embeddings layer.
mask_time_prob (`float`, *optional*, defaults to 0.05):
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
|
181_8_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
mask_time_length (`int`, *optional*, defaults to 10):
|
181_8_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
mask_time_min_masks (`int`, *optional*, defaults to 2):
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
|
181_8_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
|
181_8_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is
True`.
mask_feature_length (`int`, *optional*, defaults to 10):
Length of vector span along the feature axis.
mask_feature_min_masks (`int`, *optional*, defaults to 0):
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
|
181_8_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
step, irrespective of `mask_feature_prob`. Only relevant if
`mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
instance of [`Data2VecAudioForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
|
181_8_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
instance of [`Data2VecAudioForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`Data2VecAudioForCTC`].
use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
|
181_8_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of [`Data2VecAudioForSequenceClassification`].
classifier_proj_size (`int`, *optional*, defaults to 256):
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
|
181_8_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
|
181_8_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
A tuple of integers defining the dilation factor of each 1D convolutional layer in *TDNN* module of the
*XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
|
181_8_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
add_adapter (`bool`, *optional*, defaults to `False`):
Whether a convolutional network should be stacked on top of the Data2VecAudio Encoder. Can be very useful
for warm-starting Data2VecAudio for SpeechEncoderDecoder models.
adapter_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
|
181_8_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adapter_stride (`int`, *optional*, defaults to 2):
Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
num_adapter_layers (`int`, *optional*, defaults to 3):
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
output_hidden_size (`int`, *optional*):
|
181_8_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
True`.
output_hidden_size (`int`, *optional*):
Dimensionality of the encoder output layer. If not defined, this defaults to *hidden_size*. Only relevant
if `add_adapter is True`.
Example:
```python
>>> from transformers import Data2VecAudioConfig, Data2VecAudioModel
|
181_8_23
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/data2vec.md
|
https://huggingface.co/docs/transformers/en/model_doc/data2vec/#data2vecaudioconfig
|
.md
|
>>> # Initializing a Data2VecAudio facebook/data2vec-audio-base-960h style configuration
>>> configuration = Data2VecAudioConfig()
>>> # Initializing a model (with random weights) from the facebook/data2vec-audio-base-960h style configuration
>>> model = Data2VecAudioModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
181_8_24
|