/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforsequenceclassification
.md
Example use of Perceiver for text classification. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
407_25_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
407_25_1
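For reference, a minimal usage sketch (assuming the publicly released `deepmind/language-perceiver` checkpoint and the `inputs` argument of the forward pass; not prescribed by the docs above):

```python
import torch
from transformers import AutoTokenizer, PerceiverForSequenceClassification

# hypothetical illustration: checkpoint name is an assumption
tokenizer = AutoTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForSequenceClassification.from_pretrained("deepmind/language-perceiver")

# the Perceiver tokenizer operates on raw UTF-8 bytes, so no subword vocabulary is needed
input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(inputs=input_ids).logits  # shape (batch_size, num_labels)
print(logits.argmax(-1))
```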
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationlearned
.md
Example use of Perceiver for image classification, for tasks such as ImageNet. This model uses learned position embeddings. In other words, this model is not given any privileged information about the structure of images. As shown in the paper, this model can achieve a top-1 accuracy of 72.7 on ImageNet. [`PerceiverForImageClassificationLearned`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with `prep_type="conv1x1"`) to preprocess the input images, and
407_26_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationlearned
.md
(with `prep_type="conv1x1"`) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`] to decode the latent representation of [`PerceiverModel`] into classification logits. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
407_26_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationlearned
.md
behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
407_26_2
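A hedged end-to-end sketch (assuming the `deepmind/vision-perceiver-learned` checkpoint and an image fetched from the COCO validation set):

```python
import requests
import torch
from PIL import Image
from transformers import PerceiverImageProcessor, PerceiverForImageClassificationLearned

# hypothetical illustration: checkpoint name and image URL are assumptions
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-learned")
model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned")

inputs = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    logits = model(inputs=inputs).logits  # ImageNet classification logits
print(model.config.id2label[logits.argmax(-1).item()])
```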
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationfourier
.md
Example use of Perceiver for image classification, for tasks such as ImageNet. This model uses fixed 2D Fourier position embeddings. As shown in the paper, this model can achieve a top-1 accuracy of 79.0 on ImageNet, and 84.5 when pre-trained on a large-scale dataset (i.e. JFT). [`PerceiverForImageClassificationFourier`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with `prep_type="pixels"`) to preprocess the input images, and
407_27_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationfourier
.md
(with `prep_type="pixels"`) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`] to decode the latent representation of [`PerceiverModel`] into classification logits. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
407_27_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationfourier
.md
behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
407_27_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationconvprocessing
.md
Example use of Perceiver for image classification, for tasks such as ImageNet. This model uses a 2D conv+maxpool preprocessing network. As shown in the paper, this model can achieve a top-1 accuracy of 82.1 on ImageNet. [`PerceiverForImageClassificationConvProcessing`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with `prep_type="conv"`) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`] to decode the latent representation of
407_28_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationconvprocessing
.md
[`~models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`] to decode the latent representation of [`PerceiverModel`] into classification logits. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model.
407_28_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationconvprocessing
.md
behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
407_28_2
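The Fourier and conv-processing variants follow the same preprocessing/forward pattern as the learned-embeddings sketch above; only the class (and, presumably, the checkpoint) changes. A brief sketch with assumed checkpoint names:

```python
from transformers import (
    PerceiverForImageClassificationConvProcessing,
    PerceiverForImageClassificationFourier,
)

# checkpoint names below are assumptions mirroring the learned-embeddings variant
model_fourier = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier")
model_conv = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv")
```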
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforopticalflow
.md
Example use of Perceiver for optical flow, for tasks such as Sintel and KITTI. [`PerceiverForOpticalFlow`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with `prep_type="patches"`) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder`] to decode the latent representation of [`PerceiverModel`]. As input, one concatenates 2 subsequent frames along the channel dimension and extracts a 3 x 3 patch around each pixel
407_29_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforopticalflow
.md
As input, one concatenates 2 subsequent frames along the channel dimension and extracts a 3 x 3 patch around each pixel (leading to 3 x 3 x 3 x 2 = 54 values for each pixel). Fixed Fourier position encodings are used to encode the position of each pixel in the patch. Next, one applies the Perceiver encoder. To decode, one queries the latent representation using the same encoding used for the input.
407_29_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforopticalflow
.md
using the same encoding used for the input. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
407_29_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforopticalflow
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
407_29_3
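A minimal sketch of the optical-flow forward pass, assuming the `deepmind/optical-flow-perceiver` checkpoint and the 368 x 496 training resolution used in the Perceiver IO paper:

```python
import torch
from transformers import PerceiverForOpticalFlow

# hypothetical illustration: checkpoint name and input resolution are assumptions
model = PerceiverForOpticalFlow.from_pretrained("deepmind/optical-flow-perceiver")

# two consecutive frames; each pixel carries a 3 x 3 patch of RGB values,
# i.e. 3 x 3 x 3 = 27 values per frame (54 values per pixel across both frames)
patches = torch.randn(1, 2, 27, 368, 496)
with torch.no_grad():
    flow = model(inputs=patches).logits  # per-pixel 2D flow predictions
print(flow.shape)
```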
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
.md
Example use of Perceiver for multimodal (video) autoencoding, for tasks such as Kinetics-700. [`PerceiverForMultimodalAutoencoding`] uses [`~models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor`] to preprocess the 3 modalities: images, audio and class labels. This preprocessor uses modality-specific preprocessors to preprocess every modality separately, after which they are concatenated. Trainable position embeddings are used to pad
407_30_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
.md
preprocess every modality separately, after which they are concatenated. Trainable position embeddings are used to pad each modality to the same number of channels to make concatenation along the time dimension possible. Next, one applies the Perceiver encoder. [`~models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder`] is used to decode the latent representation of [`PerceiverModel`]. This decoder uses each modality-specific decoder to construct queries. The decoder queries are
407_30_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
.md
[`PerceiverModel`]. This decoder uses each modality-specific decoder to construct queries. The decoder queries are created based on the inputs after preprocessing. However, autoencoding an entire video in a single forward pass is computationally infeasible, hence one only uses parts of the decoder queries to do cross-attention with the latent representation. This is determined by the subsampled indices for each modality, which can be provided as additional
407_30_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
.md
representation. This is determined by the subsampled indices for each modality, which can be provided as additional input to the forward pass of [`PerceiverForMultimodalAutoencoding`]. [`~models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder`] also pads the decoder queries of the different modalities to the same number of channels, in order to concatenate them along the time dimension. Next, cross-attention is performed with the latent representation of [`PerceiverModel`].
407_30_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
.md
is performed with the latent representation of [`PerceiverModel`]. Finally, [`~models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor`] is used to turn this tensor into an actual video. It first splits up the output into the different modalities, and then applies the respective postprocessor for each modality. Note that, by masking the classification label during evaluation (i.e. simply providing a tensor of zeros for the
407_30_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
.md
Note that, by masking the classification label during evaluation (i.e. simply providing a tensor of zeros for the "label" modality), this autoencoding model becomes a Kinetics-700 video classifier. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
407_30_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
.md
behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
407_30_6
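A hedged sketch of subsampled decoding with a masked label, assuming the `deepmind/multimodal-perceiver` checkpoint, Kinetics-style inputs (16 frames, raw audio, 700 classes), and the `subsampled_output_points` forward argument described above; sizes and names here are assumptions for illustration:

```python
import torch
from transformers import PerceiverForMultimodalAutoencoding

# hypothetical illustration: checkpoint name and input sizes are assumptions
model = PerceiverForMultimodalAutoencoding.from_pretrained("deepmind/multimodal-perceiver")

inputs = {
    "image": torch.randn(1, 16, 3, 224, 224),
    "audio": torch.randn(1, 30720, 1),
    "label": torch.zeros(1, 700),  # zeroed label -> the model acts as a video classifier
}

# decode only the first of 128 chunks of output points to keep memory manageable
nchunks = 128
image_chunk = (16 * 224 * 224) // nchunks
audio_chunk = inputs["audio"].shape[1] // model.config.samples_per_patch // nchunks
subsampling = {
    "image": torch.arange(0, image_chunk),
    "audio": torch.arange(0, audio_chunk),
    "label": None,
}

with torch.no_grad():
    outputs = model(inputs=inputs, subsampled_output_points=subsampling)
print(outputs.logits["label"].argmax(-1))  # predicted class index
```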
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
408_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
408_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#overview
.md
The X-MOD model was proposed in [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. X-MOD extends multilingual masked language models like [XLM-R](xlm-roberta) to include language-specific modular components (_language adapters_) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen.
408_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#overview
.md
The abstract from the paper is the following:
408_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#overview
.md
*Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models
408_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#overview
.md
work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in
408_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#overview
.md
and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.*
408_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#overview
.md
This model was contributed by [jvamvas](https://huggingface.co/jvamvas). The original code can be found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/fairseq/models/xmod) and the original documentation is found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/examples/xmod).
408_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#usage-tips
.md
Tips: - X-MOD is similar to [XLM-R](xlm-roberta), but a difference is that the input language needs to be specified so that the correct language adapter can be activated. - The main models – base and large – have adapters for 81 languages.
408_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#input-language
.md
There are two ways to specify the input language: 1. By setting a default language before using the model: ```python from transformers import XmodModel model = XmodModel.from_pretrained("facebook/xmod-base") model.set_default_language("en_XX") ``` 2. By explicitly passing the index of the language adapter for each sample: ```python import torch
408_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#input-language
.md
input_ids = torch.tensor( [ [0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2], [0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2], ] ) lang_ids = torch.LongTensor( [ 0, # en_XX 8, # de_DE ] ) output = model(input_ids, lang_ids=lang_ids) ```
408_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#fine-tuning
.md
The paper recommends that the embedding layer and the language adapters be frozen during fine-tuning. A method for doing this is provided: ```python model.freeze_embeddings_and_language_adapters() # Fine-tune the model ... ```
408_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#cross-lingual-transfer
.md
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language: ```python model.set_default_language("de_DE") # Evaluate the model on German examples ... ```
408_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
408_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
This is the configuration class to store the configuration of a [`XmodModel`]. It is used to instantiate an X-MOD model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [facebook/xmod-base](https://huggingface.co/facebook/xmod-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
408_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the X-MOD model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`XmodModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer.
408_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
408_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
408_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2):
408_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`XmodModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
408_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
408_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`):
408_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. pre_norm (`bool`, *optional*, defaults to `False`): Whether to apply layer normalization before each block. adapter_reduction_factor (`int` or `float`, *optional*, defaults to 2):
408_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
Whether to apply layer normalization before each block. adapter_reduction_factor (`int` or `float`, *optional*, defaults to 2): The factor by which the dimensionality of the adapter is reduced relative to `hidden_size`. adapter_layer_norm (`bool`, *optional*, defaults to `False`): Whether to apply a new layer normalization before the adapter modules (shared across all adapters). adapter_reuse_layer_norm (`bool`, *optional*, defaults to `True`):
408_7_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
adapter_reuse_layer_norm (`bool`, *optional*, defaults to `True`): Whether to reuse the second layer normalization and apply it before the adapter modules as well. ln_before_adapter (`bool`, *optional*, defaults to `True`): Whether to apply the layer normalization before the residual connection around the adapter module. languages (`Iterable[str]`, *optional*, defaults to `["en_XX"]`): An iterable of language codes for which adapter modules should be initialized. default_language (`str`, *optional*):
408_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
An iterable of language codes for which adapter modules should be initialized. default_language (`str`, *optional*): Language code of a default language. It will be assumed that the input is in this language if no language codes are explicitly passed to the forward method. Examples: ```python >>> from transformers import XmodConfig, XmodModel
408_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
.md
>>> # Initializing an X-MOD facebook/xmod-base style configuration >>> configuration = XmodConfig() >>> # Initializing a model (with random weights) from the facebook/xmod-base style configuration >>> model = XmodModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
408_7_12
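Building on the `languages` and `default_language` arguments documented above, a small sketch of a configuration restricted to two adapters (the language codes are illustrative):

```python
from transformers import XmodConfig, XmodModel

# initialize adapters only for English and German, with English as the default
config = XmodConfig(languages=["en_XX", "de_DE"], default_language="en_XX")
model = XmodModel(config)  # randomly initialized weights
```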
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodmodel
.md
The bare X-MOD Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
408_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
408_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodmodel
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in *Attention is
408_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodmodel
.md
cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
408_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodmodel
.md
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
408_8_4
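A minimal sketch of enabling decoder behaviour on the base checkpoint (the flags are standard `PretrainedConfig` attributes; there is no officially published decoder checkpoint assumed here):

```python
from transformers import XmodConfig, XmodModel

config = XmodConfig.from_pretrained("facebook/xmod-base")
config.is_decoder = True           # enable causal self-attention
config.add_cross_attention = True  # add cross-attention layers for Seq2Seq use
model = XmodModel.from_pretrained("facebook/xmod-base", config=config)
```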
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforcausallm
.md
X-MOD Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
408_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
408_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforcausallm
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
408_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodformaskedlm
.md
X-MOD Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
408_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodformaskedlm
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
408_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforsequenceclassification
.md
X-MOD Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
408_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
408_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforsequenceclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
408_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodformultiplechoice
.md
X-MOD Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
408_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
408_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodformultiplechoice
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
408_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodfortokenclassification
.md
X-MOD Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
408_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
408_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodfortokenclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
408_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforquestionanswering
.md
X-MOD Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
408_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforquestionanswering
.md
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the
408_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforquestionanswering
.md
and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
408_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
409_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
409_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#distilbert
.md
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=distilbert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-distilbert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/distilbert-base-uncased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> <a href="https://huggingface.co/papers/1910.01108"> <img alt="Paper page" src="https://img.shields.io/badge/Paper%20page-1910.01108-green">
409_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#distilbert
.md
<img alt="Paper page" src="https://img.shields.io/badge/Paper%20page-1910.01108-green"> </a> </div>
409_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#overview
.md
The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than
409_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#overview
.md
small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *google-bert/bert-base-uncased* and runs 60% faster, while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark. The abstract from the paper is the following: *As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP),
409_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#overview
.md
*As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger
409_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#overview
.md
model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive
409_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#overview
.md
40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pretraining, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.*
409_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#overview
.md
demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.* This model was contributed by [victorsanh](https://huggingface.co/victorsanh). The JAX version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation).
409_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#usage-tips
.md
- DistilBERT doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). - DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if necessary though; just let us know if you need this option.
409_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#usage-tips
.md
necessary though; just let us know if you need this option. - Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it's been trained to predict the same probabilities as the larger model. The actual objective is a combination of: * finding the same probabilities as the teacher model * predicting the masked tokens correctly (but no next-sentence objective) * a cosine similarity between the hidden states of the student and the teacher model
409_3_1
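To see the first tip in practice, a short sketch with the `distilbert-base-uncased` tokenizer (note the absence of `token_type_ids` in the output):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# two segments are joined with [SEP]; DistilBERT returns no token_type_ids
encoding = tokenizer("Is this a question?", "Yes, it is.")
print(encoding.keys())  # dict_keys(['input_ids', 'attention_mask'])
```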
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
409_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. ```python import torch from transformers import DistilBertModel model = DistilBertModel.from_pretrained("distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="sdpa") ```
409_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#using-scaled-dot-product-attention-sdpa
.md
model = DistilBertModel.from_pretrained("distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="sdpa") ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (NVIDIA GeForce RTX 2060-8GB, PyTorch 2.3.1, OS Ubuntu 20.04) with `float16` and the `distilbert-base-uncased` model with a MaskedLM head, we saw the following speedups during training and inference.
409_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#training
.md
| num_training_steps | batch_size | seq_len | is cuda | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) | |--------------------|------------|---------|---------|----------------------------|---------------------------|-------------|---------------------|--------------------|----------------|
409_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#training
.md
| 100 | 1 | 128 | False | 0.010 | 0.008 | 28.870 | 397.038 | 399.629 | -0.649 | | 100 | 1 | 256 | False | 0.011 | 0.009 | 20.681 | 412.505 | 412.606 | -0.025 |
409_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#training
.md
| 100 | 2 | 128 | False | 0.011 | 0.009 | 23.741 | 412.213 | 412.606 | -0.095 | | 100 | 2 | 256 | False | 0.015 | 0.013 | 16.502 | 427.491 | 425.787 | 0.400 |
409_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#training
.md
| 100 | 4 | 128 | False | 0.015 | 0.013 | 13.828 | 427.491 | 425.787 | 0.400 | | 100 | 4 | 256 | False | 0.025 | 0.022 | 12.882 | 594.156 | 502.745 | 18.182 |
409_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#training
.md
| 100 | 8 | 128 | False | 0.023 | 0.022 | 8.010 | 545.922 | 502.745 | 8.588 | | 100 | 8 | 256 | False | 0.046 | 0.041 | 12.763 | 983.450 | 798.480 | 23.165 |
409_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#inference
.md
| num_batches | batch_size | seq_len | is cuda | is half | use mask | Per token latency eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) | |-------------|------------|---------|---------|---------|----------|-----------------------------|-----------------------------|-------------|----------------|--------------|---------------|
409_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#inference
.md
| 50 | 2 | 64 | True | True | True | 0.032 | 0.025 | 28.192 | 154.532 | 155.531 | -0.642 | | 50 | 2 | 128 | True | True | True | 0.033 | 0.025 | 32.636 | 157.286 | 157.482 | -0.125 |
409_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#inference
.md
| 50 | 4 | 64 | True | True | True | 0.032 | 0.026 | 24.783 | 157.023 | 157.449 | -0.271 | | 50 | 4 | 128 | True | True | True | 0.034 | 0.028 | 19.299 | 162.794 | 162.269 | 0.323 |
409_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#inference
.md
| 50 | 8 | 64 | True | True | True | 0.035 | 0.028 | 25.105 | 160.958 | 162.204 | -0.768 | | 50 | 8 | 128 | True | True | True | 0.052 | 0.046 | 12.375 | 173.155 | 171.844 | 0.763 |
409_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#inference
.md
| 50 | 16 | 64 | True | True | True | 0.051 | 0.045 | 12.882 | 172.106 | 171.713 | 0.229 | | 50 | 16 | 128 | True | True | True | 0.096 | 0.081 | 18.524 | 191.257 | 191.517 | -0.136 |
409_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/>
409_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#resources
.md
<PipelineTag pipeline="text-classification"/> - A blog post on [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python) with DistilBERT. - A blog post on how to [train DistilBERT with Blurr for sequence classification](https://huggingface.co/blog/fastai). - A blog post on how to use [Ray to tune DistilBERT hyperparameters](https://huggingface.co/blog/ray-tune).
409_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#resources
.md
- A blog post on how to use [Ray to tune DistilBERT hyperparameters](https://huggingface.co/blog/ray-tune). - A blog post on how to [train DistilBERT with Hugging Face and Amazon SageMaker](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face). - A notebook on how to [finetune DistilBERT for multi-label classification](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb). 🌎
409_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/distilbert.md
https://huggingface.co/docs/transformers/en/model_doc/distilbert/#resources
.md
- A notebook on how to [finetune DistilBERT for multiclass classification with PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb). 🌎 - A notebook on how to [finetune DistilBERT for text classification in TensorFlow](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb). 🌎
409_7_3