source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvlt
.md
<Tip warning={true}> This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2. You can do so by running the following command: `pip install -U transformers==4.40.2`. </Tip>
186_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#overview
.md
The TVLT model was proposed in [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156)
186_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#overview
.md
by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (the first three authors contributed equally). The Textless Vision-Language Transformer (TVLT) is a model that uses raw visual and audio inputs for vision-and-language representation learning, without using text-specific modules such as tokenization or automatic speech recognition (ASR). It can perform various audiovisual and vision-language tasks like retrieval, question answering, etc. The abstract from the paper is the following:
186_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#overview
.md
*In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT
186_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#overview
.md
of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level
186_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#overview
.md
Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text.*
186_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#overview
.md
<p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/tvlt_architecture.png" alt="drawing" width="600"/> </p> <small> TVLT architecture. Taken from the <a href="[https://arxiv.org/abs/2102.03334](https://arxiv.org/abs/2209.14156)">original paper</a>. </small> The original code can be found [here](https://github.com/zinengtang/TVLT). This model was contributed by [Zineng Tang](https://huggingface.co/ZinengTang).
186_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#usage-tips
.md
- TVLT is a model that takes both `pixel_values` and `audio_values` as input. One can use [`TvltProcessor`] to prepare data for the model. This processor wraps an image processor (for the image/video modality) and an audio feature extractor (for the audio modality) into one.
186_3_0
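As a quick, hedged illustration of the tip above (not part of the original model card): assuming the `ZinengTang/tvlt-base` checkpoint and the processor call signature of the last supported release (v4.40.2), where a video is passed as a list of frame arrays and audio as a raw waveform, preparing inputs looks roughly like this:

```python
>>> import numpy as np
>>> from transformers import TvltProcessor

>>> processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")

>>> # one video of 8 RGB frames and one mono waveform (random placeholder data)
>>> images = list(np.random.rand(8, 3, 224, 224))
>>> audio = list(np.random.rand(10000))

>>> # returns pixel_values/pixel_mask for the video and audio_values/audio_mask for the waveform
>>> inputs = processor(images, audio, sampling_rate=44100, return_tensors="pt")
```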
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#usage-tips
.md
- TVLT is trained with images/videos and audio of various sizes: the authors resize and crop the input images/videos to 224 and limit the length of the audio spectrogram to 2048. To make batching of videos and audio possible, the authors use a `pixel_mask` that indicates which pixels are real/padding and an `audio_mask` that indicates which audio values are real/padding.
186_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#usage-tips
.md
- The design of TVLT is very similar to that of a standard Vision Transformer (ViT) and masked autoencoder (MAE) as in [ViTMAE](vitmae). The difference is that the model includes embedding layers for the audio modality. - The PyTorch version of this model is only available in torch 1.10 and higher.
186_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
This is the configuration class to store the configuration of a [`TvltModel`]. It is used to instantiate a TVLT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the TVLT [ZinengTang/tvlt-base](https://huggingface.co/ZinengTang/tvlt-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
186_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. spectrogram_length (`int`, *optional*, defaults to 2048): The time length of each audio spectrogram. frequency_length (`int`, *optional*, defaults to 128): The frequency length of the audio spectrogram.
186_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
frequency_length (`int`, *optional*, defaults to 128): The frequency length of the audio spectrogram. image_patch_size (`List[int]`, *optional*, defaults to `[16, 16]`): The size (resolution) of each image patch. audio_patch_size (`List[int]`, *optional*, defaults to `[16, 16]`): The size (resolution) of each audio patch. num_image_channels (`int`, *optional*, defaults to 3): The number of input image channels. num_audio_channels (`int`, *optional*, defaults to 1): The number of input audio channels.
186_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
The number of input image channels. num_audio_channels (`int`, *optional*, defaults to 1): The number of input audio channels. num_frames (`int`, *optional*, defaults to 8): The maximum number of frames for an input video. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
186_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
186_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
186_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. use_mean_pooling (`bool`, *optional*, defaults to `False`):
186_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
Whether to add a bias to the queries, keys and values. use_mean_pooling (`bool`, *optional*, defaults to `False`): Whether to mean pool the final hidden states instead of using the final hidden state of the [CLS] token. decoder_num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the decoder. decoder_hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the decoder. decoder_num_hidden_layers (`int`, *optional*, defaults to 8):
186_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
Dimensionality of the decoder. decoder_num_hidden_layers (`int`, *optional*, defaults to 8): Number of hidden layers in the decoder. decoder_intermediate_size (`int`, *optional*, defaults to 2048): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the decoder. pixel_mask_ratio (`float`, *optional*, defaults to 0.75): Image patch masking ratio. audio_mask_ratio (`float`, *optional*, defaults to 0.15): Audio patch masking ratio.
186_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
Image patch masking ratio. audio_mask_ratio (`float`, *optional*, defaults to 0.15): Audio patch masking ratio. audio_mask_type (`str`, *optional*, defaults to `"frame-level"`): Audio patch masking type, chosen between `"frame-level"` and `"patch-level"`. task_matching (`bool`, *optional*, defaults to `True`): Whether to use the vision-audio matching task in pretraining. task_mae (`bool`, *optional*, defaults to `True`): Whether to use the masked auto-encoder (MAE) in pretraining.
186_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
task_mae (`bool`, *optional*, defaults to `True`): Whether to use the masked auto-encoder (MAE) in pretraining. loss_type (`str`, *optional*, defaults to `"classification"`): Loss type, either `"regression"` or `"classification"`. Example: ```python >>> from transformers import TvltConfig, TvltModel
186_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltconfig
.md
>>> # Initializing a TVLT ZinengTang/tvlt-base style configuration >>> configuration = TvltConfig() >>> # Initializing a model (with random weights) from the ZinengTang/tvlt-base style configuration >>> model = TvltModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
186_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltprocessor
.md
Constructs a TVLT processor which wraps a TVLT image processor and TVLT feature extractor into a single processor. [`TvltProcessor`] offers all the functionalities of [`TvltImageProcessor`] and [`TvltFeatureExtractor`]. See the docstring of [`~TvltProcessor.__call__`] for more information. Args: image_processor (`TvltImageProcessor`): An instance of [`TvltImageProcessor`]. The image processor is a required input. feature_extractor (`TvltFeatureExtractor`):
186_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltprocessor
.md
An instance of [`TvltImageProcessor`]. The image processor is a required input. feature_extractor (`TvltFeatureExtractor`): An instance of [`TvltFeatureExtractor`]. The feature extractor is a required input. Methods: __call__
186_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltimageprocessor
.md
Constructs a TVLT image processor. This processor can be used to prepare either videos or images for the model by converting images to 1-frame videos. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`):
186_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltimageprocessor
.md
`do_resize` parameter in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`): Size of the output image after resizing. The shortest edge of the image will be resized to `size["shortest_edge"]` while maintaining the aspect ratio of the original image. Can be overridden by `size` in the `preprocess` method. patch_size (`List[int]`, *optional*, defaults to `[16, 16]`): The patch size of the image patch embedding. num_frames (`int`, *optional*, defaults to 8):
186_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltimageprocessor
.md
The patch size of the image patch embedding. num_frames (`int`, *optional*, defaults to 8): The maximum number of video frames. resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the image to the specified `crop_size`. Can be overridden by the `do_center_crop`
186_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltimageprocessor
.md
Whether to center crop the image to the specified `crop_size`. Can be overridden by the `do_center_crop` parameter in the `preprocess` method. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Size of the image after applying the center crop. Can be overridden by the `crop_size` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
186_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltimageprocessor
.md
`preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to 1/255): Defines the scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`):
186_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltimageprocessor
.md
in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
186_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltimageprocessor
.md
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. Methods: preprocess
186_6_6
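A small, hedged sketch of the parameters above in action (assuming the v4.40.2 API): frames are resized so the shortest edge is 224, center-cropped to 224x224, rescaled by 1/255 and normalized:

```python
>>> import numpy as np
>>> from transformers import TvltImageProcessor

>>> image_processor = TvltImageProcessor()

>>> # a single video given as a list of 8 (channels, height, width) uint8 frames
>>> video = list(np.random.randint(0, 256, (8, 3, 256, 256), dtype=np.uint8))

>>> # resize -> center crop -> rescale -> normalize, then stack into a batch of one video
>>> outputs = image_processor(video, return_tensors="pt")
```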
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltfeatureextractor
.md
Constructs a TVLT audio feature extractor. This feature extractor can be used to prepare audio for the model. This feature extractor inherits from [`FeatureExtractionMixin`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: spectrogram_length (`int`, *optional*, defaults to 2048): The time length of each audio spectrogram. num_channels (`int`, *optional*, defaults to 1): Number of audio channels.
186_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltfeatureextractor
.md
The time length of each audio spectrogram. num_channels (`int`, *optional*, defaults to 1): Number of audio channels. patch_size (`List[int]`, *optional*, defaults to `[16, 16]`): The patch size of the audio patch embedding. feature_size (`int`, *optional*, defaults to 128): The frequency length of the audio spectrogram. sampling_rate (`int`, *optional*, defaults to 44100): The sampling rate at which the audio files should be digitized, expressed in Hertz (Hz).
186_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltfeatureextractor
.md
The sampling rate at which the audio files should be digitized, expressed in Hertz (Hz). hop_length_to_sampling_rate (`int`, *optional*, defaults to 86): The hop length is the length of the overlapping windows for the STFT used to obtain the Mel-frequency coefficients. For example, with a sampling rate of 44100 and a hop length of 512, 44100 / 512 ≈ 86. n_fft (`int`, *optional*, defaults to 2048): Size of the Fourier transform. padding_value (`float`, *optional*, defaults to 0.0):
186_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltfeatureextractor
.md
Size of the Fourier transform. padding_value (`float`, *optional*, defaults to 0.0): Padding value used to pad the audio. Should correspond to silences. Methods: __call__
186_7_3
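A hedged usage sketch (assuming the v4.40.2 call signature): the extractor turns a raw waveform into a mel spectrogram with `feature_size` (128) frequency bins, padded or truncated to at most `spectrogram_length` (2048) frames:

```python
>>> import numpy as np
>>> from transformers import TvltFeatureExtractor

>>> feature_extractor = TvltFeatureExtractor()

>>> # one second of mono audio at the default 44100 Hz sampling rate (random placeholder data)
>>> speech = np.random.rand(44100).astype(np.float32)

>>> # returns audio_values (the spectrogram); an audio_mask marks real vs. padded positions when batching
>>> features = feature_extractor(speech, sampling_rate=44100, return_tensors="pt")
```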
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltmodel
.md
The bare TVLT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`TvltConfig`]): Model configuration class with all the parameters of the model.
186_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltmodel
.md
behavior. Parameters: config ([`TvltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
186_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltforpretraining
.md
The TVLT Model transformer with the decoder on top for self-supervised pre-training. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`TvltConfig`]): Model configuration class with all the parameters of the model.
186_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltforpretraining
.md
behavior. Parameters: config ([`TvltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
186_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltforaudiovisualclassification
.md
Tvlt Model transformer with a classifier head on top (an MLP on top of the final hidden state of the [CLS] token) for audiovisual classification tasks, e.g. CMU-MOSEI Sentiment Analysis and Audio to Video Retrieval. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
186_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvlt.md
https://huggingface.co/docs/transformers/en/model_doc/tvlt/#tvltforaudiovisualclassification
.md
behavior. Parameters: config ([`TvltConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
186_10_1
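A hedged end-to-end sketch along the lines of the model docstring (the classification head of the base checkpoint is randomly initialized, so the logits are only meaningful after fine-tuning):

```python
>>> import numpy as np
>>> import torch
>>> from transformers import TvltProcessor, TvltForAudioVisualClassification

>>> processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")
>>> model = TvltForAudioVisualClassification.from_pretrained("ZinengTang/tvlt-base")

>>> # random placeholder video (8 frames) and waveform
>>> images = list(np.random.rand(8, 3, 224, 224))
>>> audio = list(np.random.rand(10000))
>>> inputs = processor(images, audio, sampling_rate=44100, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
```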
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
187_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
187_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#overview
.md
The Time Series Transformer model is a vanilla encoder-decoder Transformer for time series forecasting. This model was contributed by [kashif](https://huggingface.co/kashif).
187_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
- Similar to other models in the library, [`TimeSeriesTransformerModel`] is the raw Transformer without any head on top, and [`TimeSeriesTransformerForPrediction`] adds a distribution head on top of the former, which can be used for time-series forecasting. Note that this is a so-called probabilistic forecasting model, not a point forecasting model. This means that the model learns a distribution, from which one can sample. The model doesn't directly output values.
187_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
- [`TimeSeriesTransformerForPrediction`] consists of 2 blocks: an encoder, which takes a `context_length` of time series values as input (called `past_values`), and a decoder, which predicts a `prediction_length` of time series values into the future (called `future_values`). During training, one needs to provide pairs of (`past_values` and `future_values`) to the model.
187_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
pairs of (`past_values` and `future_values`) to the model. - In addition to the raw (`past_values` and `future_values`), one typically provides additional features to the model. These can be the following: - `past_time_features`: temporal features which the model will add to `past_values`. These serve as "positional encodings" for the Transformer encoder. Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector).
187_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector). e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year"). - `future_time_features`: temporal features which the model will add to `future_values`. These serve as "positional encodings" for the Transformer decoder.
187_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector). e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year"). - `static_categorical_features`: categorical features which are static over time (i.e., have the same value for all `past_values` and `future_values`).
187_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
An example here is the store ID or region ID that identifies a given time-series. Note that these features need to be known for ALL data points (also those in the future). - `static_real_features`: real-valued features which are static over time (i.e., have the same value for all `past_values` and `future_values`). An example here is the image representation of the product for which you have the time-series values (like the [ResNet](resnet) embedding of a "shoe" picture,
187_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
if your time-series is about the sales of shoes). Note that these features need to be known for ALL data points (also those in the future). - The model is trained using "teacher-forcing", similar to how a Transformer is trained for machine translation. This means that, during training, one shifts the `future_values` one position to the right as input to the decoder, prepended by the last value of `past_values`. At each time step, the model needs to predict the
187_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#usage-tips
.md
next target. So the set-up of training is similar to a GPT model for language, except that there's no notion of `decoder_start_token_id` (we just use the last value of the context as initial input for the decoder). - At inference time, we give the final value of the `past_values` as input to the decoder. Next, we can sample from the model to make a prediction at the next time step, which is then fed to the decoder in order to make the next prediction (also called autoregressive generation).
187_2_7
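The training setup described above can be sketched with the pre-batched helper dataset used elsewhere in this doc page (assuming `hf-internal-testing/tourism-monthly-batch` is still available on the Hub; the keyword names follow the `forward` docstring):

```python
>>> import torch
>>> from huggingface_hub import hf_hub_download
>>> from transformers import TimeSeriesTransformerForPrediction

>>> file = hf_hub_download(
...     repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
>>> batch = torch.load(file)

>>> model = TimeSeriesTransformerForPrediction.from_pretrained(
...     "huggingface/time-series-transformer-tourism-monthly"
... )

>>> # during training, both past and future values are given (teacher forcing)
>>> outputs = model(
...     past_values=batch["past_values"],
...     past_time_features=batch["past_time_features"],
...     past_observed_mask=batch["past_observed_mask"],
...     static_categorical_features=batch["static_categorical_features"],
...     static_real_features=batch["static_real_features"],
...     future_values=batch["future_values"],
...     future_time_features=batch["future_time_features"],
... )
>>> loss = outputs.loss
>>> loss.backward()
```

At inference time, `future_values` is dropped and `model.generate(...)` autoregressively samples `num_parallel_samples` trajectories, returned in `outputs.sequences`.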
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - Check out the Time Series Transformer blog post on the Hugging Face blog: [Probabilistic Time Series Forecasting with 🤗 Transformers](https://huggingface.co/blog/time-series-transformers)
187_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
This is the configuration class to store the configuration of a [`TimeSeriesTransformerModel`]. It is used to instantiate a Time Series Transformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Time Series Transformer [huggingface/time-series-transformer-tourism-monthly](https://huggingface.co/huggingface/time-series-transformer-tourism-monthly) architecture.
187_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: prediction_length (`int`): The prediction length for the decoder. In other words, the prediction horizon of the model. This value is typically dictated by the dataset and we recommend setting it appropriately. context_length (`int`, *optional*, defaults to `prediction_length`):
187_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
context_length (`int`, *optional*, defaults to `prediction_length`): The context length for the encoder. If `None`, the context length will be the same as the `prediction_length`. distribution_output (`string`, *optional*, defaults to `"student_t"`): The distribution emission head for the model. Could be either "student_t", "normal" or "negative_binomial". loss (`string`, *optional*, defaults to `"nll"`): The loss function for the model corresponding to the `distribution_output` head. For parametric
187_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
The loss function for the model corresponding to the `distribution_output` head. For parametric distributions it is the negative log likelihood (nll) - which currently is the only supported one. input_size (`int`, *optional*, defaults to 1): The size of the target variable, which by default is 1 for univariate targets. Would be > 1 in case of multivariate targets. scaling (`string` or `bool`, *optional*, defaults to `"mean"`):
187_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
multivariate targets. scaling (`string` or `bool`, *optional*, defaults to `"mean"`): Whether to scale the input targets via "mean" scaler, "std" scaler or no scaler if `None`. If `True`, the scaler is set to "mean". lags_sequence (`list[int]`, *optional*, defaults to `[1, 2, 3, 4, 5, 6, 7]`): The lags of the input time series as covariates, often dictated by the frequency of the data. Default is `[1, 2, 3, 4, 5, 6, 7]` but we recommend changing it appropriately based on the dataset.
187_4_4
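One practical consequence of `lags_sequence` is worth spelling out: `past_values` must be longer than `context_length`, since lagged copies of the series are built as covariates. A quick arithmetic check (standard config attributes):

```python
>>> from transformers import TimeSeriesTransformerConfig

>>> config = TimeSeriesTransformerConfig(
...     prediction_length=12, context_length=24, lags_sequence=[1, 2, 3, 4, 5, 6, 7]
... )

>>> # the model expects context_length + max(lags_sequence) past values
>>> config.context_length + max(config.lags_sequence)
31
```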
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
`[1, 2, 3, 4, 5, 6, 7]` but we recommend changing it appropriately based on the dataset. num_time_features (`int`, *optional*, defaults to 0): The number of time features in the input time series. num_dynamic_real_features (`int`, *optional*, defaults to 0): The number of dynamic real valued features. num_static_categorical_features (`int`, *optional*, defaults to 0): The number of static categorical features. num_static_real_features (`int`, *optional*, defaults to 0):
187_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
The number of static categorical features. num_static_real_features (`int`, *optional*, defaults to 0): The number of static real valued features. cardinality (`list[int]`, *optional*): The cardinality (number of different values) for each of the static categorical features. Should be a list of integers, having the same length as `num_static_categorical_features`. Cannot be `None` if `num_static_categorical_features` is > 0. embedding_dimension (`list[int]`, *optional*):
187_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
`num_static_categorical_features` is > 0. embedding_dimension (`list[int]`, *optional*): The dimension of the embedding for each of the static categorical features. Should be a list of integers, having the same length as `num_static_categorical_features`. Cannot be `None` if `num_static_categorical_features` is > 0. d_model (`int`, *optional*, defaults to 64): Dimensionality of the transformer layers. encoder_layers (`int`, *optional*, defaults to 2): Number of encoder layers.
187_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
Dimensionality of the transformer layers. encoder_layers (`int`, *optional*, defaults to 2): Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 2): Number of decoder layers. encoder_attention_heads (`int`, *optional*, defaults to 2): Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (`int`, *optional*, defaults to 2): Number of attention heads for each attention layer in the Transformer decoder.
187_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
Number of attention heads for each attention layer in the Transformer decoder. encoder_ffn_dim (`int`, *optional*, defaults to 32): Dimension of the "intermediate" (often named feed-forward) layer in the encoder. decoder_ffn_dim (`int`, *optional*, defaults to 32): Dimension of the "intermediate" (often named feed-forward) layer in the decoder. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
187_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and decoder. If string, `"gelu"` and `"relu"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the encoder and decoder. encoder_layerdrop (`float`, *optional*, defaults to 0.1): The dropout probability for the attention and fully connected layers for each encoder layer.
187_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
The dropout probability for the attention and fully connected layers for each encoder layer. decoder_layerdrop (`float`, *optional*, defaults to 0.1): The dropout probability for the attention and fully connected layers for each decoder layer. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.1): The dropout probability used between the two layers of the feed-forward networks.
187_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
The dropout probability used between the two layers of the feed-forward networks. num_parallel_samples (`int`, *optional*, defaults to 100): The number of samples to generate in parallel for each time step of inference. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated normal weight initialization distribution. use_cache (`bool`, *optional*, defaults to `True`): Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
187_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
Whether to use the past key/values attentions (if applicable to the model) to speed up decoding. Example: ```python >>> from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerModel
187_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerconfig
.md
>>> # Initializing a Time Series Transformer configuration with 12 time steps for prediction >>> configuration = TimeSeriesTransformerConfig(prediction_length=12) >>> # Initializing a model (with random weights) from the configuration >>> model = TimeSeriesTransformerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
187_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformermodel
.md
The bare Time Series Transformer Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
187_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformermodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`TimeSeriesTransformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
187_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformermodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
187_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerforprediction
.md
The Time Series Transformer Model with a distribution head on top for time-series forecasting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
187_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerforprediction
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`TimeSeriesTransformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
187_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/time_series_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/time_series_transformer/#timeseriestransformerforprediction
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
187_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
188_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
188_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#overview
.md
The GPTNeo model was released in the [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) repository by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. It is a GPT2-like causal language model trained on the [Pile](https://pile.eleuther.ai/) dataset. The architecture is similar to GPT2 except that GPT Neo uses local attention in every other layer with a window size of 256 tokens. This model was contributed by [valhalla](https://huggingface.co/valhalla).
188_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#usage-example
.md
The `generate()` method can be used to generate text using the GPT Neo model. ```python >>> from transformers import GPTNeoForCausalLM, GPT2Tokenizer >>> model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
188_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#usage-example
.md
>>> prompt = ( ... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " ... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " ... "researchers was the fact that the unicorns spoke perfect English." ... ) >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
188_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#usage-example
.md
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ```
188_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#combining-gpt-neo-and-flash-attention-2
.md
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature, and make sure your hardware is compatible with Flash Attention 2. More details about the installation are available [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2). Also make sure to load your model in half-precision (e.g. `torch.float16`). To load and run a model using Flash Attention 2, refer to the snippet below: ```python >>> import torch
188_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#combining-gpt-neo-and-flash-attention-2
.md
To load and run a model using Flash Attention 2, refer to the snippet below: ```python >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> device = "cuda" # the device to load the model onto
188_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#combining-gpt-neo-and-flash-attention-2
.md
>>> model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B", torch_dtype=torch.float16, attn_implementation="flash_attention_2") >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B") >>> prompt = "def hello_world():" >>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device) >>> model.to(device)
188_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#combining-gpt-neo-and-flash-attention-2
.md
>>> prompt = "def hello_world():" >>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device) >>> model.to(device) >>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] "def hello_world():\n >>> run_script("hello.py")\n >>> exit(0)\n<|endoftext|>" ```
188_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#expected-speedups
.md
Below is an expected speedup diagram comparing pure inference time between the native implementation in transformers using the `EleutherAI/gpt-neo-2.7B` checkpoint and the Flash Attention 2 version of the model. Note that for GPT-Neo it is not possible to train / run on very long contexts, as the maximum number of [position embeddings](https://huggingface.co/EleutherAI/gpt-neo-2.7B/blob/main/config.json#L58) is limited to 2048 - but this applies to all gpt-neo models and is not specific to FA-2.
188_4_0
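The 2048-token limit mentioned above can be read directly from the checkpoint configuration:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("EleutherAI/gpt-neo-2.7B")
>>> config.max_position_embeddings
2048
```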
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#expected-speedups
.md
<div style="text-align: center"> <img src="https://user-images.githubusercontent.com/49240599/272241893-b1c66b75-3a48-4265-bc47-688448568b3d.png"> </div>
188_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling)
188_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
This is the configuration class to store the configuration of a [`GPTNeoModel`]. It is used to instantiate a GPT Neo model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPTNeo [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
188_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50257): Vocabulary size of the GPT Neo model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`GPTNeoModel`].
188_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
`inputs_ids` passed when calling [`GPTNeoModel`]. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_size (`int`, *optional*, defaults to 2048):
188_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
just in case (e.g., 512 or 1024 or 2048). hidden_size (`int`, *optional*, defaults to 2048): Dimensionality of the encoder layers and the pooler layer. num_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. attention_types (`List`, *optional*, defaults to `[[['global', 'local'], 12]]`): The type of attention for each layer in a `List` of the following format `[[["attention_type"],
188_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
The type of attention for each layer in a `List` of the following format `[[["attention_type"], num_layers]]`, e.g. for a 24-layer model `[[["global"], 24]]` or `[[["global", "local"], 12]]`. Choose the value of `attention_type` from `["global", "local"]`. num_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 8192):
188_6_4
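As a small sanity check of the `attention_types` format described above (using the `attention_layers` attribute the current implementation derives from it):

```python
>>> from transformers import GPTNeoConfig

>>> # 24 layers alternating global and local attention, as in the released checkpoints
>>> config = GPTNeoConfig(num_layers=24, attention_types=[[["global", "local"], 12]])
>>> config.attention_layers[:4]
['global', 'local', 'global', 'local']
```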
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
intermediate_size (`int`, *optional*, defaults to 8192): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. window_size (`int`, *optional*, defaults to 256): The size of the sliding window for local attention. activation_function (`str` or `function`, *optional*, defaults to `"gelu_new"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
188_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
`"relu"`, `"selu"` and `"gelu_new"` are supported. resid_dropout (`float`, *optional*, defaults to 0.0): Residual dropout used in the attention pattern. embed_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. classifier_dropout (`float`, *optional*, defaults to 0.1):
188_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
The dropout ratio for the attention probabilities. classifier_dropout (`float`, *optional*, defaults to 0.1): Argument used when doing token classification, used in the model [`GPTNeoForTokenClassification`]. The dropout ratio for the hidden layer. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02):
188_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
The epsilon used by the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. bos_token_id (`int`, *optional*, defaults to 50256):
188_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
relevant if `config.is_decoder=True`. bos_token_id (`int`, *optional*, defaults to 50256): The id of the beginning of sentence token in the vocabulary. eos_token_id (`int`, *optional*, defaults to 50256): The id of the end of sentence token in the vocabulary. Example: ```python >>> from transformers import GPTNeoConfig, GPTNeoModel
188_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoconfig
.md
>>> # Initializing a GPTNeo EleutherAI/gpt-neo-1.3B style configuration >>> configuration = GPTNeoConfig() >>> # Initializing a model (with random weights) from the EleutherAI/gpt-neo-1.3B style configuration >>> model = GPTNeoModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` <frameworkcontent> <pt>
188_6_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneomodel
.md
The bare GPT Neo Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
188_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneomodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GPTNeoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
188_7_1