source (string, 470 classes) | url (string, lengths 49-167) | file_type (string, 1 class) | chunk (string, lengths 1-512) | chunk_id (string, lengths 5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Mask Predictions from each layer in the transformer decoder.
attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed):
Tuple of `tuple(torch.FloatTensor)` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`. Self-attention weights from the transformer decoder.
models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput
Class for outputs of [`Mask2FormerForUniversalSegmentation`].
|
226_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Class for outputs of [`Mask2FormerForUniversalSegmentation`].
This output can be directly passed to [`~Mask2FormerImageProcessor.post_process_semantic_segmentation`] or
[`~Mask2FormerImageProcessor.post_process_instance_segmentation`] or
[`~Mask2FormerImageProcessor.post_process_panoptic_segmentation`] to compute final segmentation maps. Please see
[`~Mask2FormerImageProcessor`] for details regarding usage.
Args:
loss (`torch.Tensor`, *optional*):
|
226_5_7
|
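To ground the post-processing note above, here is a minimal inference sketch: run the model and pass its output object directly to the semantic post-processing method. The checkpoint name and the local image path are assumptions; any Mask2Former semantic checkpoint should work the same way.

```python
import torch
from PIL import Image
from transformers import Mask2FormerImageProcessor, Mask2FormerForUniversalSegmentation

# Checkpoint name is an assumption; substitute any Mask2Former semantic checkpoint.
checkpoint = "facebook/mask2former-swin-tiny-ade-semantic"
processor = Mask2FormerImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)

image = Image.open("scene.jpg")  # hypothetical local image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The output object is passed directly to the post-processing method,
# together with the desired output resolution (height, width).
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)  # (height, width) tensor of class ids
```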
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
[`~Mask2FormerImageProcessor`] for details regarding usage.
Args:
loss (`torch.Tensor`, *optional*):
The computed loss, returned when labels are present.
class_queries_logits (`torch.FloatTensor`):
A tensor of shape `(batch_size, num_queries, num_labels + 1)` representing the proposed classes for each
query. Note the `+ 1` is needed because we incorporate the null class.
masks_queries_logits (`torch.FloatTensor`):
|
226_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
query. Note the `+ 1` is needed because we incorporate the null class.
masks_queries_logits (`torch.FloatTensor`):
A tensor of shape `(batch_size, num_queries, height, width)` representing the proposed masks for each
query.
auxiliary_logits (`List[Dict(str, torch.FloatTensor)]`, *optional*):
List of class and mask predictions from each layer of the transformer decoder.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
|
226_5_9
|
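To make the documented shapes concrete, the sketch below combines randomly generated class and mask logits into a per-pixel semantic map. It mirrors the general idea behind the semantic post-processing (resizing and checkpoint-specific details omitted); all tensor values here are random placeholders.

```python
import torch

batch_size, num_queries, num_labels, height, width = 1, 100, 150, 96, 96

# Shapes as documented above (the `+ 1` is the null class).
class_queries_logits = torch.randn(batch_size, num_queries, num_labels + 1)
masks_queries_logits = torch.randn(batch_size, num_queries, height, width)

# Per-query class probabilities (dropping the null class) and per-pixel mask probabilities.
masks_classes = class_queries_logits.softmax(dim=-1)[..., :-1]  # (b, q, num_labels)
masks_probs = masks_queries_logits.sigmoid()                    # (b, q, h, w)

# Weight each query's mask by its class scores and sum over queries.
segmentation = torch.einsum("bqc,bqhw->bchw", masks_classes, masks_probs)
semantic_map = segmentation.argmax(dim=1)                       # (b, h, w) of class ids
```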
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Last hidden states (final feature map) of the last stage of the encoder model (backbone).
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
|
226_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Last hidden states (final feature map) of the last stage of the pixel decoder model.
|
226_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Last hidden states (final feature map) of the last stage of the pixel decoder model.
pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
|
226_5_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
decoder model at the output of each stage.
transformer_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Final output (last hidden state) of the transformer decoder.
transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
|
226_5_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of
shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
226_5_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#maskformer-specific-outputs
|
.md
|
Tuple of `tuple(torch.FloatTensor)` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`. Self- and cross-attention weights from the transformer decoder.
|
226_5_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formermodel
|
.md
|
The bare Mask2Former Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`Mask2FormerConfig`]): Model configuration class with all the parameters of the model.
|
226_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formermodel
|
.md
|
behavior.
Parameters:
config ([`Mask2FormerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
226_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerforuniversalsegmentation
|
.md
|
The Mask2Former Model with heads on top for instance/semantic/panoptic segmentation.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`Mask2FormerConfig`]): Model configuration class with all the parameters of the model.
|
226_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerforuniversalsegmentation
|
.md
|
behavior.
Parameters:
config ([`Mask2FormerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
226_7_1
|
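A short sketch of the distinction drawn above between config-only initialization (random weights) and loading pretrained weights with `from_pretrained`; the checkpoint name is an assumption.

```python
from transformers import Mask2FormerConfig, Mask2FormerForUniversalSegmentation

# Initializing from a configuration gives a model with random weights...
config = Mask2FormerConfig()
model = Mask2FormerForUniversalSegmentation(config)

# ...while from_pretrained downloads and loads trained weights
# (checkpoint name is an assumption).
pretrained = Mask2FormerForUniversalSegmentation.from_pretrained(
    "facebook/mask2former-swin-tiny-coco-instance"
)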
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
Constructs a Mask2Former image processor. The image processor can be used to prepare image(s) and optional targets
for the model.
This image processor inherits from [`BaseImageProcessor`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the input to a certain `size`.
size (`int`, *optional*, defaults to 800):
|
226_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
Whether to resize the input to a certain `size`.
size (`int`, *optional*, defaults to 800):
Resize the input to the given size. Only has an effect if `do_resize` is set to `True`. If size is a
sequence like `(width, height)`, the output size will be matched to it. If size is an int, the smaller edge of
the image will be matched to this number, i.e., if `height > width`, the image will be rescaled to `(size *
height / width, size)`.
size_divisor (`int`, *optional*, defaults to 32):
|
226_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
height / width, size)`.
size_divisor (`int`, *optional*, defaults to 32):
Some backbones need images divisible by a certain number. If not passed, it defaults to the value used in
Swin Transformer.
resample (`int`, *optional*, defaults to `Resampling.BILINEAR`):
An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`,
`PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`,
|
226_8_2
|
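A small sketch of the resizing arithmetic described above (the smaller edge is matched to `size`, and the result is rounded to a multiple of `size_divisor`). The rounding direction is an assumption and the processor's exact implementation may differ; this only illustrates the rule.

```python
def resized_shape(height: int, width: int, size: int = 800, size_divisor: int = 32):
    """Illustrative only: compute (new_height, new_width) per the documented rule."""
    if height > width:
        # width is the smaller edge: width -> size, height scales proportionally
        new_w, new_h = size, int(round(size * height / width))
    else:
        # height is the smaller edge: height -> size, width scales proportionally
        new_h, new_w = size, int(round(size * width / height))
    if size_divisor:
        # Round down to a multiple of size_divisor (rounding direction is an assumption).
        new_h = new_h // size_divisor * size_divisor
        new_w = new_w // size_divisor * size_divisor
    return new_h, new_w

print(resized_shape(480, 640))  # -> (800, 1056): smaller edge (height) scaled to 800
```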
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
`PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`,
`PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set
to `True`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the input to a certain `scale`.
rescale_factor (`float`, *optional*, defaults to `1/255`):
Rescale the input by the given factor. Only has an effect if `do_rescale` is set to `True`.
|
226_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
Rescale the input by the given factor. Only has an effect if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the input with mean and standard deviation.
image_mean (`List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`):
The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean.
image_std (`List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`):
|
226_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
image_std (`List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`):
The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the
ImageNet std.
ignore_index (`int`, *optional*):
Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels
denoted with 0 (background) will be replaced with `ignore_index`.
do_reduce_labels (`bool`, *optional*, defaults to `False`):
|
226_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
denoted with 0 (background) will be replaced with `ignore_index`.
do_reduce_labels (`bool`, *optional*, defaults to `False`):
Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0
is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k).
The background label will be replaced by `ignore_index`.
num_labels (`int`, *optional*):
The number of labels in the segmentation map.
Methods: preprocess
- encode_inputs
|
226_8_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mask2former.md
|
https://huggingface.co/docs/transformers/en/model_doc/mask2former/#mask2formerimageprocessor
|
.md
|
num_labels (`int`, *optional*):
The number of labels in the segmentation map.
Methods: preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
|
226_8_7
|
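Putting the arguments above together, a hedged sketch of preparing an image and a segmentation map for the model. The argument values mirror the documented defaults, the dummy arrays are placeholders, and `ignore_index=255` is an illustrative choice.

```python
import numpy as np
from transformers import Mask2FormerImageProcessor

processor = Mask2FormerImageProcessor(
    do_resize=True,
    size=800,
    size_divisor=32,
    do_normalize=True,
    image_mean=[0.485, 0.456, 0.406],
    image_std=[0.229, 0.224, 0.225],
    ignore_index=255,        # label assigned to background pixels (illustrative value)
    do_reduce_labels=False,
)

# Dummy image and per-pixel label map, just to show the call signature.
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
segmentation_map = np.random.randint(0, 10, (480, 640), dtype=np.uint8)

inputs = processor(images=image, segmentation_maps=segmentation_map, return_tensors="pt")
print(inputs.keys())  # typically: pixel_values, pixel_mask, mask_labels, class_labels
```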
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/
|
.md
|
<!--Copyright 2023 IBM and HuggingFace Inc. team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
227_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/
|
.md
|
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
227_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
The PatchTSMixer model was proposed in [TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting](https://arxiv.org/pdf/2306.09364.pdf) by Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.
|
227_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
PatchTSMixer is a lightweight time-series modeling approach based on the MLP-Mixer architecture. In this HuggingFace implementation, we provide PatchTSMixer's capabilities to effortlessly facilitate lightweight mixing across patches, channels, and hidden features for effective multivariate time-series modeling. It also supports various attention mechanisms starting from simple gated attention to more complex self-attention blocks that can be customized accordingly. The model can be pretrained and
|
227_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
gated attention to more complex self-attention blocks that can be customized accordingly. The model can be pretrained and subsequently used for various downstream tasks such as forecasting, classification and regression.
|
227_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
The abstract from the paper is the following:
|
227_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
*TSMixer is a lightweight neural architecture exclusively composed of multi-layer perceptron (MLP) modules designed for multivariate forecasting and representation learning on patched time series. Our model draws inspiration from the success of MLP-Mixer models in computer vision. We demonstrate the challenges involved in adapting Vision MLP-Mixer for time series and introduce empirically validated components to enhance accuracy. This includes a novel design paradigm of attaching online reconciliation
|
227_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
empirically validated components to enhance accuracy. This includes a novel design paradigm of attaching online reconciliation heads to the MLP-Mixer backbone, for explicitly modeling the time-series properties such as hierarchy and channel-correlations. We also propose a Hybrid channel modeling approach to effectively handle noisy channel interactions and generalization across diverse datasets, a common challenge in existing patch channel-mixing methods. Additionally, a simple gated attention mechanism is
|
227_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
datasets, a common challenge in existing patch channel-mixing methods. Additionally, a simple gated attention mechanism is introduced in the backbone to prioritize important features. By incorporating these lightweight components, we significantly enhance the learning capability of simple MLP structures, outperforming complex Transformer models with minimal computing usage. Moreover, TSMixer's modular design enables compatibility with both supervised and masked self-supervised learning methods, making it a
|
227_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
TSMixer's modular design enables compatibility with both supervised and masked self-supervised learning methods, making it a promising building block for time-series Foundation Models. TSMixer outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%. It also outperforms the latest strong benchmarks of Patch-Transformer models (by 1-2%) with a significant reduction in memory and runtime (2-3X).*
|
227_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#overview
|
.md
|
This model was contributed by [ajati](https://huggingface.co/ajati), [vijaye12](https://huggingface.co/vijaye12),
[gsinthong](https://huggingface.co/gsinthong), [namctin](https://huggingface.co/namctin),
[wmgifford](https://huggingface.co/wmgifford), [kashif](https://huggingface.co/kashif).
|
227_1_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#usage-example
|
.md
|
The code snippet below shows how to randomly initialize a PatchTSMixer model. The model is compatible with the [Trainer API](../trainer.md).
```python
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction
from transformers import Trainer, TrainingArguments
|
227_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#usage-example
|
.md
|
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction
from transformers import Trainer, TrainingArguments

config = PatchTSMixerConfig(context_length=512, prediction_length=96)
model = PatchTSMixerForPrediction(config)

# `training_args`, `train_dataset`, `valid_dataset` and `test_dataset` are assumed to be defined beforehand.
trainer = Trainer(model=model, args=training_args,
                  train_dataset=train_dataset,
                  eval_dataset=valid_dataset)
trainer.train()
results = trainer.evaluate(test_dataset)
```
|
227_2_1
|
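The snippet above assumes `training_args` and the datasets already exist; below is a hedged sketch of what they could look like. The dataset returns dicts keyed by `past_values` and `future_values`, which matches the model's forward signature; all shapes and argument values are assumptions.

```python
import torch
from torch.utils.data import Dataset
from transformers import TrainingArguments

class DummyForecastDataset(Dataset):
    """Toy dataset returning the keys PatchTSMixerForPrediction expects."""
    def __init__(self, num_samples=64, context_length=512, prediction_length=96, num_channels=1):
        self.past = torch.randn(num_samples, context_length, num_channels)
        self.future = torch.randn(num_samples, prediction_length, num_channels)

    def __len__(self):
        return len(self.past)

    def __getitem__(self, idx):
        return {"past_values": self.past[idx], "future_values": self.future[idx]}

train_dataset = DummyForecastDataset()
valid_dataset = DummyForecastDataset(num_samples=16)
test_dataset = DummyForecastDataset(num_samples=16)

training_args = TrainingArguments(
    output_dir="patchtsmixer-example",  # hypothetical output directory
    per_device_train_batch_size=8,
    num_train_epochs=1,
)
```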
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#usage-tips
|
.md
|
The model can also be used for time series classification and time series regression. See the respective [`PatchTSMixerForTimeSeriesClassification`] and [`PatchTSMixerForRegression`] classes.
|
227_3_0
|
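A hedged sketch of instantiating these two heads. Using `num_targets` as the number of classes for classification is an assumption (the argument is documented below for regression), and all values are illustrative.

```python
from transformers import (
    PatchTSMixerConfig,
    PatchTSMixerForTimeSeriesClassification,
    PatchTSMixerForRegression,
)

# Classification: num_targets used here as the number of classes (assumption).
clf_config = PatchTSMixerConfig(context_length=512, num_input_channels=3, num_targets=5)
classifier = PatchTSMixerForTimeSeriesClassification(clf_config)

# Regression of 2 real-valued targets.
reg_config = PatchTSMixerConfig(context_length=512, num_input_channels=3, num_targets=2)
regressor = PatchTSMixerForRegression(reg_config)
```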
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#resources
|
.md
|
- A blog post explaining PatchTSMixer in depth can be found [here](https://huggingface.co/blog/patchtsmixer). The blog can also be opened in Google Colab.
|
227_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
This is the configuration class to store the configuration of a [`PatchTSMixerModel`]. It is used to instantiate a
PatchTSMixer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the PatchTSMixer
[ibm/patchtsmixer-etth1-pretrain](https://huggingface.co/ibm/patchtsmixer-etth1-pretrain) architecture.
|
227_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
[ibm/patchtsmixer-etth1-pretrain](https://huggingface.co/ibm/patchtsmixer-etth1-pretrain) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
context_length (`int`, *optional*, defaults to 32):
The context/history length for the input sequence.
patch_length (`int`, *optional*, defaults to 8):
The patch length for the input sequence.
|
227_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
patch_length (`int`, *optional*, defaults to 8):
The patch length for the input sequence.
num_input_channels (`int`, *optional*, defaults to 1):
Number of input variates. For Univariate, set it to 1.
patch_stride (`int`, *optional*, defaults to 8):
Determines the overlap between two consecutive patches. Set it to `patch_length` (or greater) if we want
non-overlapping patches.
num_parallel_samples (`int`, *optional*, defaults to 100):
The number of samples to generate in parallel for probabilistic forecast.
|
227_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
The number of samples to generate in parallel for probabilistic forecast.
d_model (`int`, *optional*, defaults to 8):
Hidden dimension of the model. Recommended to set it as a multiple of `patch_length` (i.e. 2-5X of
`patch_length`). A larger value indicates a more complex model.
expansion_factor (`int`, *optional*, defaults to 2):
Expansion factor to use inside the MLP. Recommended range is 2-5. A larger value indicates a more complex model.
num_layers (`int`, *optional*, defaults to 3):
|
227_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
num_layers (`int`, *optional*, defaults to 3):
Number of layers to use. Recommended range is 3-15. A larger value indicates a more complex model.
dropout (`float`, *optional*, defaults to 0.2):
The dropout probability for the `PatchTSMixer` backbone. Recommended range is 0.2-0.7.
mode (`str`, *optional*, defaults to `"common_channel"`):
Mixer Mode. Determines how to process the channels. Allowed values: "common_channel", "mix_channel". In
|
227_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
Mixer Mode. Determines how to process the channels. Allowed values: "common_channel", "mix_channel". In
"common_channel" mode, we follow Channel-independent modelling with no explicit channel-mixing. Channel
mixing happens in an implicit manner via shared weights across channels. (preferred first approach) In
"mix_channel" mode, we follow explicit channel-mixing in addition to patch and feature mixer. (preferred
approach when channel correlations are very important to model)
|
227_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
approach when channel correlations are very important to model)
gated_attn (`bool`, *optional*, defaults to `True`):
Enable Gated Attention.
norm_mlp (`str`, *optional*, defaults to `"LayerNorm"`):
Normalization layer (BatchNorm or LayerNorm).
self_attn (`bool`, *optional*, defaults to `False`):
Enable tiny self-attention across patches. This can be enabled when the output of vanilla PatchTSMixer with
gated attention is not satisfactory. Enabling this leads to explicit pairwise attention and modelling
|
227_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
gated attention is not satisfactory. Enabling this leads to explicit pair-wise attention and modelling
across patches.
self_attn_heads (`int`, *optional*, defaults to 1):
Number of self-attention heads. Works only when `self_attn` is set to `True`.
use_positional_encoding (`bool`, *optional*, defaults to `False`):
Enable the use of positional embedding for the tiny self-attention layers. Works only when `self_attn` is
set to `True`.
positional_encoding_type (`str`, *optional*, defaults to `"sincos"`):
|
227_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
set to `True`.
positional_encoding_type (`str`, *optional*, defaults to `"sincos"`):
Positional encodings. Options `"random"` and `"sincos"` are supported. Works only when
`use_positional_encoding` is set to `True`
scaling (`string` or `bool`, *optional*, defaults to `"std"`):
Whether to scale the input targets via "mean" scaler, "std" scaler or no scaler if `None`. If `True`, the
scaler is set to "mean".
loss (`string`, *optional*, defaults to `"mse"`):
|
227_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
scaler is set to "mean".
loss (`string`, *optional*, defaults to `"mse"`):
The loss function for the model corresponding to the `distribution_output` head. For parametric
distributions it is the negative log likelihood ("nll") and for point estimates it is the mean squared
error "mse".
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated normal weight initialization distribution.
post_init (`bool`, *optional*, defaults to `False`):
|
227_5_9
|
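Tying the `scaling` and `loss` arguments together with the distribution-related options documented elsewhere in this section (`distribution_output`, `num_parallel_samples`), here is a hedged configuration sketch for probabilistic forecasting; all values are illustrative.

```python
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction

# Probabilistic forecasting: parametric head trained with negative log likelihood.
config = PatchTSMixerConfig(
    context_length=512,
    prediction_length=96,
    scaling="std",                    # "mean", "std" or None, as described above
    loss="nll",                       # train the distribution head with NLL
    distribution_output="student_t",  # "student_t", "normal" or "negative_binomial"
    num_parallel_samples=100,         # samples drawn for probabilistic forecasts
)
model = PatchTSMixerForPrediction(config)
```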
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
post_init (`bool`, *optional*, defaults to `False`):
Whether to use custom weight initialization from `transformers` library, or the default initialization in
`PyTorch`. Setting it to `False` performs `PyTorch` weight initialization.
norm_eps (`float`, *optional*, defaults to 1e-05):
A value added to the denominator for numerical stability of normalization.
mask_type (`str`, *optional*, defaults to `"random"`):
|
227_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
mask_type (`str`, *optional*, defaults to `"random"`):
Type of masking to use for Masked Pretraining mode. Allowed values are "random", "forecast". In Random
masking, points are masked randomly. In Forecast masking, points are masked towards the end.
random_mask_ratio (`float`, *optional*, defaults to 0.5):
Masking ratio to use when `mask_type` is `random`. Higher value indicates more masking.
num_forecast_mask_patches (`int` or `list`, *optional*, defaults to `[2]`):
|
227_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
num_forecast_mask_patches (`int` or `list`, *optional*, defaults to `[2]`):
Number of patches to be masked at the end of each batch sample. If it is an integer, all the samples in the
batch will have the same number of masked patches. If it is a list, samples in the batch will be randomly
masked by numbers defined in the list. This argument is only used for forecast pretraining.
mask_value (`float`, *optional*, defaults to `0.0`):
Mask value to use.
masked_loss (`bool`, *optional*, defaults to `True`):
|
227_5_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
mask_value (`float`, *optional*, defaults to `0.0`):
Mask value to use.
masked_loss (`bool`, *optional*, defaults to `True`):
Whether to compute pretraining loss only at the masked portions, or on the entire output.
channel_consistent_masking (`bool`, *optional*, defaults to `True`):
When true, masking will be the same across all channels of a timeseries. Otherwise, masking positions will vary
across channels.
unmasked_channel_indices (`list`, *optional*):
Channels that are not masked during pretraining.
|
227_5_13
|
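A hedged sketch of a masked-pretraining configuration built from the masking arguments described above; the values are illustrative.

```python
from transformers import PatchTSMixerConfig, PatchTSMixerForPretraining

config = PatchTSMixerConfig(
    context_length=512,
    patch_length=8,
    patch_stride=8,
    num_input_channels=3,
    mask_type="random",               # or "forecast"
    random_mask_ratio=0.5,            # fraction of patches masked
    mask_value=0.0,
    masked_loss=True,                 # compute the pretraining loss only on masked patches
    channel_consistent_masking=True,  # same mask positions across channels
)
model = PatchTSMixerForPretraining(config)
```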
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
across channels.
unmasked_channel_indices (`list`, *optional*):
Channels that are not masked during pretraining.
head_dropout (`float`, *optional*, defaults to 0.2):
The dropout probability for the `PatchTSMixer` head.
distribution_output (`string`, *optional*, defaults to `"student_t"`):
The distribution emission head for the model when loss is "nll". Could be either "student_t", "normal" or
"negative_binomial".
prediction_length (`int`, *optional*, defaults to 16):
|
227_5_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
"negative_binomial".
prediction_length (`int`, *optional*, defaults to 16):
Number of time steps to forecast for a forecasting task. Also known as the Forecast Horizon.
prediction_channel_indices (`list`, *optional*):
List of channel indices to forecast. If None, forecast all channels. Target data is expected to have all
channels and we explicitly filter the channels in prediction and target before loss computation.
num_targets (`int`, *optional*, defaults to 3):
|
227_5_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
num_targets (`int`, *optional*, defaults to 3):
Number of targets (dimensionality of the regressed variable) for a regression task.
output_range (`list`, *optional*):
Output range to restrict for the regression task. Defaults to None.
head_aggregation (`str`, *optional*, defaults to `"max_pool"`):
Aggregation mode to enable for classification or regression task. Allowed values are `None`, "use_last",
"max_pool", "avg_pool".
Example:
```python
|
227_5_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
"max_pool", "avg_pool".
Example:
```python
>>> from transformers import PatchTSMixerConfig, PatchTSMixerModel
|
227_5_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerconfig
|
.md
|
>>> # Initializing a default PatchTSMixer configuration
>>> configuration = PatchTSMixerConfig()
>>> # Randomly initializing a model (with random weights) from the configuration
>>> model = PatchTSMixerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
227_5_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixermodel
|
.md
|
The PatchTSMixer Model for time-series forecasting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
|
227_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixermodel
|
.md
|
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PatchTSMixerConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
mask_input (`bool`, *optional*, defaults to `False`):
|
227_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixermodel
|
.md
|
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
mask_input (`bool`, *optional*, defaults to `False`):
If `True`, masking will be enabled; otherwise it is disabled.
Methods: forward
|
227_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerforprediction
|
.md
|
`PatchTSMixer` for forecasting applications.
Args:
config (`PatchTSMixerConfig`):
Configuration.
Returns:
`None`.
Methods: forward
|
227_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerfortimeseriesclassification
|
.md
|
`PatchTSMixer` for classification applications.
Args:
config (`PatchTSMixerConfig`):
Configuration.
Returns:
`None`.
Methods: forward
|
227_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerforpretraining
|
.md
|
`PatchTSMixer` for mask pretraining.
Args:
config (`PatchTSMixerConfig`):
Configuration.
Returns:
`None`.
Methods: forward
|
227_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtsmixer.md
|
https://huggingface.co/docs/transformers/en/model_doc/patchtsmixer/#patchtsmixerforregression
|
.md
|
`PatchTSMixer` for regression applications.
Args:
config (`PatchTSMixerConfig`):
Configuration.
Returns:
`None`.
Methods: forward
|
227_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
228_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
228_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#overview
|
.md
|
The GPTBigCode model was proposed in [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by BigCode. The listed authors are: Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del
|
228_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#overview
|
.md
|
Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
|
228_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#overview
|
.md
|
The abstract from the paper is the following:
|
228_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#overview
|
.md
|
*The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java,
|
228_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#overview
|
.md
|
the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code
|
228_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#overview
|
.md
|
with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at [this https URL.](https://huggingface.co/bigcode)*
|
228_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#overview
|
.md
|
The model is an optimized [GPT2 model](https://huggingface.co/docs/transformers/model_doc/gpt2) with support for Multi-Query Attention.
|
228_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#implementation-details
|
.md
|
The main differences compared to GPT2 are:
- Added support for Multi-Query Attention.
- Use `gelu_pytorch_tanh` instead of the classic `gelu` (see the sketch below).
- Avoid unnecessary synchronizations (this has since been added to GPT2 in #20061, but wasn't in the reference codebase).
- Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible).
- Merge `_attn` and `_upcast_and_reordered_attn`. Always merge the matmul with scaling. Rename `reorder_and_upcast_attn` -> `attention_softmax_in_fp32`.
|
228_2_0
|
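The `gelu_pytorch_tanh` activation mentioned above corresponds to the tanh approximation of GELU; below is a small PyTorch sketch comparing it to the exact form (shown for illustration, not taken from the model code).

```python
import math
import torch

x = torch.linspace(-3, 3, steps=7)

exact = torch.nn.functional.gelu(x)                            # classic GELU
tanh_approx = torch.nn.functional.gelu(x, approximate="tanh")  # tanh approximation

# The tanh approximation expands to:
manual = 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))
print(torch.allclose(tanh_approx, manual, atol=1e-6))
```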
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#implementation-details
|
.md
|
- Cache the attention mask value to avoid recreating it every time.
- Use jit to fuse the attention fp32 casting, masking, softmax, and scaling.
- Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer.
- Merge the key and value caches into one (this changes the format of `layer_past`/`present`; does it risk creating problems?)
|
228_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#implementation-details
|
.md
|
- Merge the key and value caches into one (this changes the format of `layer_past`/`present`; does it risk creating problems?)
- Use the memory layout `(self.num_heads, 3, self.head_dim)` instead of `(3, self.num_heads, self.head_dim)` for the QKV tensor with MHA (this prevents overhead with the merged key and values, but makes the checkpoints incompatible with the original openai-community/gpt2 model); see the layout sketch below.
|
228_2_2
|
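To make the layout note above concrete, here is a small sketch (with assumed dimensions) of splitting a fused QKV projection under the two memory layouts; this is illustrative, not the model's actual code.

```python
import torch

batch, seq, num_heads, head_dim = 2, 16, 12, 64
qkv = torch.randn(batch, seq, 3 * num_heads * head_dim)  # fused QKV projection output

# Layout described above for MHA: (num_heads, 3, head_dim); the docs note this avoids
# overhead with the merged key/value cache.
q, k, v = qkv.view(batch, seq, num_heads, 3, head_dim).unbind(dim=3)

# Original GPT-2 style layout: (3, num_heads, head_dim).
q2, k2, v2 = qkv.view(batch, seq, 3, num_heads, head_dim).unbind(dim=2)

print(q.shape, k.shape, v.shape)  # each (batch, seq, num_heads, head_dim)
```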
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#implementation-details
|
.md
|
You can read more about the optimizations in the [original pull request](https://github.com/huggingface/transformers/pull/22575)
|
228_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#combining-starcoder-and-flash-attention-2
|
.md
|
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
|
228_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#combining-starcoder-and-flash-attention-2
|
.md
|
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> device = "cuda" # the device to load the model onto
|
228_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#combining-starcoder-and-flash-attention-2
|
.md
|
>>> model = AutoModelForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
>>> tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
>>> prompt = "def hello_world():"
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)
|
228_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#combining-starcoder-and-flash-attention-2
|
.md
|
>>> prompt = "def hello_world():"
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'def hello_world():\n print("hello world")\n\nif __name__ == "__main__":\n print("hello world")\n<|endoftext|>'
```
|
228_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#expected-speedups
|
.md
|
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `bigcode/starcoder` checkpoint and the Flash Attention 2 version of the model, using two different sequence lengths.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/starcoder-speedup.png">
</div>
|
228_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
This is the configuration class to store the configuration of a [`GPTBigCodeModel`]. It is used to instantiate a
GPTBigCode model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GPTBigCode
[gpt_bigcode](https://huggingface.co/gpt_bigcode) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
228_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50257):
Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`GPTBigCodeModel`].
n_positions (`int`, *optional*, defaults to 1024):
|
228_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
`inputs_ids` passed when calling [`GPTBigCodeModel`].
n_positions (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (`int`, *optional*, defaults to 768):
Dimensionality of the embeddings and hidden states.
n_layer (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 12):
|
228_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
n_inner (`int`, *optional*, defaults to None):
Dimensionality of the inner feed-forward layers. `None` will set it to 4 times `n_embd`.
activation_function (`str`, *optional*, defaults to `"gelu_pytorch_tanh"`):
Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new",
"gelu_pytorch_tanh"]`.
|
228_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new",
"gelu_pytorch_tanh"]`.
resid_pdrop (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the embeddings.
attn_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
|
228_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
The dropout ratio for the attention.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
The epsilon to use in the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_attn_weights (`bool`, *optional*, defaults to `True`):
Scale attention weights by dividing by `sqrt(hidden_size)`.
use_cache (`bool`, *optional*, defaults to `True`):
|
228_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
Scale attention weights by dividing by `sqrt(hidden_size)`.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
attention_softmax_in_fp32 (`bool`, *optional*, defaults to `True`):
Whether to call the fused softmax in float32.
scale_attention_softmax_in_fp32 (`bool`, *optional*, defaults to `True`):
Whether to scale the attention softmax in float32.
attention_type (`bool`, *optional*, defaults to `True`):
|
228_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
Whether to scale the attention softmax in float32.
attention_type (`bool`, *optional*, defaults to `True`):
Whether to use Multi-Query Attention (`True`) or Multi-Head Attention (`False`).
Example:
```python
>>> from transformers import GPTBigCodeConfig, GPTBigCodeModel
|
228_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeconfig
|
.md
|
>>> # Initializing a GPTBigCode configuration
>>> configuration = GPTBigCodeConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = GPTBigCodeModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
228_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodemodel
|
.md
|
The bare GPT_BIGCODE Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
228_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodemodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`GPTBigCodeConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
228_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodemodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
228_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeforcausallm
|
.md
|
The GPT_BIGCODE Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
228_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`GPTBigCodeConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
228_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
228_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeforsequenceclassification
|
.md
|
The GPTBigCode Model transformer with a sequence classification head on top (linear layer).
[`GPTBigCodeForSequenceClassification`] uses the last token in order to do the classification, as other causal
models (e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
228_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
228_8_1
|
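A small sketch (with made-up token ids) of the last-non-padding-token rule described above; the model applies the same idea to pick which hidden state feeds the classification head, assuming right padding.

```python
import torch

pad_token_id = 0
input_ids = torch.tensor([
    [15, 27, 42,  0,  0],   # padded sequence -> last real token at index 2
    [ 8,  9, 10, 11, 12],   # no padding      -> last token at index 4
])

# Index of the last token that is not a padding token in each row
# (assumes right padding, with no pad tokens in the middle of a sequence).
sequence_lengths = (input_ids != pad_token_id).sum(dim=-1) - 1
print(sequence_lengths)  # tensor([2, 4])
```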
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
228_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodeforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`GPTBigCodeConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
228_8_3
|