source (stringclasses, 470 values)
url (stringlengths, 49-167)
file_type (stringclasses, 1 value)
chunk (stringlengths, 1-512)
chunk_id (stringlengths, 5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer
.md
"__classify__") to a vocabulary. - The `lang2id` attribute maps the languages supported by the model with their IDs if provided (automatically set for pretrained vocabularies). - The `id2lang` attributes does reverse mapping if provided (automatically set for pretrained vocabularies). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Vocabulary file.
416_6_1
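The `lang2id`/`id2lang` behaviour described in this chunk can be checked directly on a loaded tokenizer. A minimal sketch, assuming the multilingual `FacebookAI/xlm-mlm-tlm-xnli15-1024` checkpoint is available; the attribute names are the ones documented above.

```python
from transformers import XLMTokenizer

# Assumed multilingual XLM checkpoint; any pretrained XLM vocabulary sets these attributes automatically.
tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-tlm-xnli15-1024")

# `lang2id` maps language codes to IDs, `id2lang` does the reverse mapping.
print(tokenizer.lang2id)                           # e.g. {"ar": 0, ..., "en": 4, ...}
print(tokenizer.id2lang[tokenizer.lang2id["en"]])  # "en"
```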
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer
.md
this superclass for more information regarding those methods. Args: vocab_file (`str`): Vocabulary file. merges_file (`str`): Merges file. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip>
416_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer
.md
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
416_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"</s>"`):
416_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer
.md
cls_token (`str`, *optional*, defaults to `"</s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"<special1>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
416_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer
.md
modeling. This is the token which the model will try to predict. additional_special_tokens (`List[str]`, *optional*, defaults to `['<special0>', '<special1>', '<special2>', '<special3>', '<special4>', '<special5>', '<special6>', '<special7>', '<special8>', '<special9>']`): List of additional special tokens. lang2id (`Dict[str, int]`, *optional*): Dictionary mapping languages string identifiers to their IDs. id2lang (`Dict[int, str]`, *optional*): Dictionary mapping language IDs to their string identifiers.
416_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer
.md
id2lang (`Dict[int, str]`, *optional*): Dictionary mapping language IDs to their string identifiers. do_lowercase_and_remove_accent (`bool`, *optional*, defaults to `True`): Whether to lowercase and remove accents when tokenizing. Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
416_6_7
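A hedged sketch of the helper methods listed above on a pair of sequences; the checkpoint name is an assumption, and the exact special-token layout should be verified against the `cls_token`/`sep_token` defaults documented above.

```python
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-en-2048")  # assumed checkpoint

ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

# cls_token + A + sep_token (+ B + sep_token), following the parameter descriptions above.
with_special = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
token_types = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
special_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)

print(tokenizer.convert_ids_to_tokens(with_special))
print(token_types)    # 0 for the first segment, 1 for the second
print(special_mask)   # 1 marks the added special tokens
```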
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm-specific-outputs
.md
models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput Base class for outputs of question answering models using a `SquadHead`. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned if both `start_positions` and `end_positions` are provided): Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.
416_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm-specific-outputs
.md
Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses. start_top_log_probs (`torch.FloatTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Log probabilities for the top `config.start_n_top` start token possibilities (beam-search).
416_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm-specific-outputs
.md
Log probabilities for the top `config.start_n_top` start token possibilities (beam-search). start_top_index (`torch.LongTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Indices for the top `config.start_n_top` start token possibilities (beam-search). end_top_log_probs (`torch.FloatTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided):
416_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm-specific-outputs
.md
Log probabilities for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search). end_top_index (`torch.LongTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Indices for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search).
416_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm-specific-outputs
.md
Indices for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search). cls_logits (`torch.FloatTensor` of shape `(batch_size,)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Log probabilities for the `is_impossible` label of the answers. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
416_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
416_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. <frameworkcontent> <pt>
416_7_6
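A sketch of how the beam-search fields above are typically produced, assuming the `FacebookAI/xlm-mlm-en-2048` checkpoint; without a fine-tuned QA head the values are not meaningful, but the shapes match the descriptions.

```python
import torch
from transformers import XLMTokenizer, XLMForQuestionAnswering

# Assumed checkpoint; the SquadHead on top is freshly initialized unless a fine-tuned QA checkpoint is used.
tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-en-2048")
model = XLMForQuestionAnswering.from_pretrained("FacebookAI/xlm-mlm-en-2048")

inputs = tokenizer("Who proposed XLM?", "XLM was proposed by Lample and Conneau.", return_tensors="pt")

# Without start_positions/end_positions, the beam-search fields are returned instead of a loss.
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.start_top_log_probs.shape)  # (batch_size, config.start_n_top)
print(outputs.end_top_log_probs.shape)    # (batch_size, config.start_n_top * config.end_n_top)
print(outputs.cls_logits.shape)           # (batch_size,)
```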
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmmodel
.md
The bare XLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
416_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
416_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
416_8_2
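A minimal forward pass through the bare model, assuming the `FacebookAI/xlm-mlm-en-2048` checkpoint; it only shows the raw hidden-state output described above.

```python
import torch
from transformers import XLMTokenizer, XLMModel

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-en-2048")  # assumed checkpoint
model = XLMModel.from_pretrained("FacebookAI/xlm-mlm-en-2048")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Raw hidden states without any task head: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```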
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmwithlmheadmodel
.md
The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
416_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmwithlmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
416_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmwithlmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
416_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforsequenceclassification
.md
XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
416_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
416_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
416_10_2
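A hedged sketch of sequence classification with this head; the checkpoint and label count are illustrative, and the classification head is randomly initialized until fine-tuned.

```python
import torch
from transformers import XLMTokenizer, XLMForSequenceClassification

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-en-2048")  # assumed checkpoint
model = XLMForSequenceClassification.from_pretrained("FacebookAI/xlm-mlm-en-2048", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
labels = torch.tensor([1])  # illustrative label

outputs = model(**inputs, labels=labels)
print(outputs.loss)          # classification (or regression) loss
print(outputs.logits.shape)  # (batch_size, num_labels), from the head on the pooled output
```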
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmformultiplechoice
.md
XLM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
416_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
416_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmformultiplechoice
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
416_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmfortokenclassification
.md
XLM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
416_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
416_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmfortokenclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
416_12_2
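A hedged token-classification sketch; the checkpoint and `num_labels` are assumptions, and predictions from an untuned head are random.

```python
import torch
from transformers import XLMTokenizer, XLMForTokenClassification

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-en-2048")  # assumed checkpoint
model = XLMForTokenClassification.from_pretrained("FacebookAI/xlm-mlm-en-2048", num_labels=5)

inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, sequence_length, num_labels)

# One predicted label per token, e.g. for NER tagging.
print(logits.argmax(dim=-1))
```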
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforquestionansweringsimple
.md
XLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
416_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforquestionansweringsimple
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model.
416_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforquestionansweringsimple
.md
and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
416_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforquestionanswering
.md
XLM Model with a beam-search span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
416_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model.
416_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmforquestionanswering
.md
and behavior. Parameters: config ([`XLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
416_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#tfxlmmodel
.md
No docstring available for TFXLMModel Methods: call
416_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#tfxlmwithlmheadmodel
.md
No docstring available for TFXLMWithLMHeadModel Methods: call
416_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#tfxlmforsequenceclassification
.md
No docstring available for TFXLMForSequenceClassification Methods: call
416_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#tfxlmformultiplechoice
.md
No docstring available for TFXLMForMultipleChoice Methods: call
416_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#tfxlmfortokenclassification
.md
No docstring available for TFXLMForTokenClassification Methods: call
416_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md
https://huggingface.co/docs/transformers/en/model_doc/xlm/#tfxlmforquestionansweringsimple
.md
No docstring available for TFXLMForQuestionAnsweringSimple Methods: call </tf> </frameworkcontent>
416_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
417_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
417_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#overview
.md
The LongT5 model was proposed in [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. The LongT5 model is an extension of the T5 model, and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention.
417_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#overview
.md
Transient-Global attention. The abstract from the paper is the following: *Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated
417_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#overview
.md
explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated attention ideas from long-input transformers (ETC), and adopted pre-training strategies from summarization pre-training (PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call *Transient Global* (TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side-inputs. We are
417_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#overview
.md
(TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side-inputs. We are able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on question answering tasks.* This model was contributed by [stancld](https://huggingface.co/stancld). The original code can be found [here](https://github.com/google-research/longt5).
417_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
- [`LongT5ForConditionalGeneration`] is an extension of [`T5ForConditionalGeneration`] exchanging the traditional encoder *self-attention* layer with either efficient *local* attention or *transient-global* (*tglobal*) attention. - Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective inspired by the pre-training of [`PegasusForConditionalGeneration`].
417_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
inspired by the pre-training of [`PegasusForConditionalGeneration`]. - The LongT5 model is designed to work efficiently and very well on long-range *sequence-to-sequence* tasks where the input sequence exceeds the commonly used 512 tokens. It is capable of handling input sequences of length up to 16,384 tokens. - For *Local Attention*, the sparse sliding-window local attention operation allows a given token to attend only `r`
417_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
- For *Local Attention*, the sparse sliding-window local attention operation allows a given token to attend only `r` tokens to the left and right of it (with `r=127` by default). *Local Attention* does not introduce any new parameters to the model. The complexity of the mechanism is linear in input sequence length `l`: `O(l*r)`. - *Transient Global Attention* is an extension of the *Local Attention*. It, furthermore, allows each input token to
417_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
- *Transient Global Attention* is an extension of the *Local Attention*. It, furthermore, allows each input token to interact with all other tokens in the layer. This is achieved via splitting an input sequence into blocks of a fixed length `k` (with a default `k=16`). Then, a global token for such a block is obtained via summing and normalizing the embeddings of every token in the block. Thanks to this, the attention allows each token to attend to both nearby tokens like in Local attention, and
417_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
in the block. Thanks to this, the attention allows each token to attend to both nearby tokens like in Local attention, and also every global token like in the case of standard global attention (*transient* represents the fact that the global tokens are constructed dynamically within each attention operation). As a consequence, *TGlobal* attention introduces a few new parameters -- global relative position biases and a layer normalization for the global tokens' embeddings.
417_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
a few new parameters -- global relative position biases and a layer normalization for the global tokens' embeddings. The complexity of this mechanism is `O(l(r + l/k))`. - An example showing how to evaluate a fine-tuned LongT5 model on the [pubmed dataset](https://huggingface.co/datasets/scientific_papers) is below. ```python >>> import evaluate >>> from datasets import load_dataset >>> from transformers import AutoTokenizer, LongT5ForConditionalGeneration
417_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
>>> dataset = load_dataset("scientific_papers", "pubmed", split="validation") >>> model = ( ... LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps") ... .to("cuda") ... .half() ... ) >>> tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
417_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
>>> def generate_answers(batch): ... inputs_dict = tokenizer( ... batch["article"], max_length=16384, padding="max_length", truncation=True, return_tensors="pt" ... ) ... input_ids = inputs_dict.input_ids.to("cuda") ... attention_mask = inputs_dict.attention_mask.to("cuda") ... output_ids = model.generate(input_ids, attention_mask=attention_mask, max_length=512, num_beams=2) ... batch["predicted_abstract"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
417_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
... batch["predicted_abstract"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True) ... return batch
417_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#usage-tips
.md
>>> result = dataset.map(generate_answers, batched=True, batch_size=2) >>> rouge = evaluate.load("rouge") >>> rouge.compute(predictions=result["predicted_abstract"], references=result["abstract"]) ```
417_2_9
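A back-of-the-envelope illustration of the complexity statements in the tips above, using the defaults `r=127` and `k=16`; the counts are order-of-magnitude token-pair numbers, not exact operation counts.

```python
# Token-pair counts for a 16,384-token input with the default local radius and block size.
l, r, k = 16_384, 127, 16

local = l * (2 * r + 1)              # Local Attention: each token sees r neighbours on each side, O(l*r)
tglobal = l * (2 * r + 1 + l // k)   # TGlobal: local window plus l/k global block tokens, O(l*(r + l/k))
full = l * l                         # standard full self-attention, for comparison

print(f"local: {local:,}  tglobal: {tglobal:,}  full: {full:,}")
```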
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#resources
.md
- [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization)
417_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
This is the configuration class to store the configuration of a [`LongT5Model`] or a [`FlaxLongT5Model`]. It is used to instantiate a LongT5 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the LongT5 [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) architecture.
417_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
[google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Arguments: vocab_size (`int`, *optional*, defaults to 32128): Vocabulary size of the LongT5 model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`LongT5Model`].
417_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
`input_ids` passed when calling [`LongT5Model`]. d_model (`int`, *optional*, defaults to 512): Size of the encoder layers and the pooler layer. d_kv (`int`, *optional*, defaults to 64): Size of the key, query, value projections per attention head. `d_kv` has to be equal to `d_model // num_heads`. d_ff (`int`, *optional*, defaults to 2048): Size of the intermediate feed forward layer in each `LongT5Block`. num_layers (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer encoder.
417_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
num_layers (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer encoder. num_decoder_layers (`int`, *optional*): Number of hidden layers in the Transformer decoder. Will use the same value as `num_layers` if not set. num_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder. local_radius (`int`, *optional*, defaults to 127):
417_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
local_radius (`int`, *optional*, defaults to 127): Number of tokens to the left/right for each token to locally self-attend in a local attention mechanism. global_block_size (`int`, *optional*, defaults to 16): Length of blocks an input sequence is divided into for a global token representation. Used only for `encoder_attention_type = "transient-global"`. relative_attention_num_buckets (`int`, *optional*, defaults to 32): The number of buckets to use for each attention layer.
417_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
relative_attention_num_buckets (`int`, *optional*, defaults to 32): The number of buckets to use for each attention layer. relative_attention_max_distance (`int`, *optional*, defaults to 128): The maximum distance of the longer sequences for the bucket separation. dropout_rate (`float`, *optional*, defaults to 0.1): The ratio for all dropout layers. layer_norm_eps (`float`, *optional*, defaults to 1e-6): The epsilon used by the layer normalization layers.
417_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-6): The epsilon used by the layer normalization layers. initializer_factor (`float`, *optional*, defaults to 1): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). feed_forward_proj (`string`, *optional*, defaults to `"relu"`): Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`. LongT5v1.1 uses the
417_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`. LongT5v1.1 uses the `"gated-gelu"` feed forward projection. The original LongT5 implementation uses `"gated-gelu"`. encoder_attention_type (`string`, *optional*, defaults to `"local"`): Type of encoder attention to be used. Should be one of `"local"` or `"transient-global"`, which are supported by the LongT5 implementation. use_cache (`bool`, *optional*, defaults to `True`):
417_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5config
.md
supported by the LongT5 implementation. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/value attentions (not used by all models). <frameworkcontent> <pt>
417_4_8
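A minimal sketch instantiating a config with transient-global attention, using only the parameter names documented above; the values are illustrative.

```python
from transformers import LongT5Config, LongT5Model

config = LongT5Config(
    d_model=512,
    d_ff=2048,
    num_layers=6,
    num_heads=8,
    local_radius=127,
    global_block_size=16,
    encoder_attention_type="transient-global",  # or "local"
)

# Initializes random weights from the configuration; use from_pretrained(...) for trained weights.
model = LongT5Model(config)
print(model.config.encoder_attention_type)
```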
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5model
.md
The bare LONGT5 Model transformer outputting raw hidden-states without any specific head on top. The LongT5 model was proposed in [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an encoder-decoder transformer pre-trained in a text-to-text denoising
417_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5model
.md
Ni, Yun-Hsuan Sung and Yinfei Yang. It's an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. The LongT5 model is an extension of the T5 model, and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
417_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5model
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
417_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5model
.md
and behavior. Parameters: config ([`LongT5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
417_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5forconditionalgeneration
.md
LONGT5 Model with a `language modeling` head on top. The LongT5 model was proposed in [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. The LongT5 model is an extension of the T5 model, and it enables using one of the two different
417_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5forconditionalgeneration
.md
generative setting. The LongT5 model is an extension of the T5 model, and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
417_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5forconditionalgeneration
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`LongT5Config`]): Model configuration class with all the parameters of the model.
417_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5forconditionalgeneration
.md
and behavior. Parameters: config ([`LongT5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
417_6_3
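A short generation sketch; `google/long-t5-tglobal-base` is assumed to be available, and note that no task prefix is used (see the usage tips above).

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

# Assumed checkpoint; any LongT5 checkpoint should work the same way.
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

# Unlike T5, no task prefix ("summarize: ", etc.) is prepended to the input.
inputs = tokenizer("A very long input document ...", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64, num_beams=2)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```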
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5encodermodel
.md
The bare LONGT5 Model transformer outputting encoder's raw hidden-states without any specific head on top. The LongT5 model was proposed in [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an encoder-decoder transformer pre-trained in a text-to-text denoising
417_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5encodermodel
.md
Ni, Yun-Hsuan Sung and Yinfei Yang. It's an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. The LongT5 model is an extension of the T5 model, and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
417_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5encodermodel
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
417_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#longt5encodermodel
.md
and behavior. Parameters: config ([`LongT5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <jax>
417_7_3
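A hedged sketch of extracting encoder-only hidden states with this class; the checkpoint name is an assumption.

```python
import torch
from transformers import AutoTokenizer, LongT5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")  # assumed checkpoint
model = LongT5EncoderModel.from_pretrained("google/long-t5-local-base")

inputs = tokenizer("A long input document ...", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Encoder hidden states without any head: (batch_size, sequence_length, d_model)
print(outputs.last_hidden_state.shape)
```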
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#flaxlongt5model
.md
No docstring available for FlaxLongT5Model Methods: __call__ - encode - decode
417_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longt5.md
https://huggingface.co/docs/transformers/en/model_doc/longt5/#flaxlongt5forconditionalgeneration
.md
No docstring available for FlaxLongT5ForConditionalGeneration Methods: __call__ - encode - decode </jax> </frameworkcontent>
417_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
418_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
418_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/#overview
.md
The CPM model was proposed in [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. The abstract from the paper is the following:
418_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/#overview
.md
The abstract from the paper is the following: *Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus
418_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/#overview
.md
zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained
418_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/#overview
.md
of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in the settings of few-shot (even zero-shot) learning.*
418_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/#overview
.md
NLP tasks in the settings of few-shot (even zero-shot) learning.* This model was contributed by [canwenxu](https://huggingface.co/canwenxu). The original implementation can be found here: https://github.com/TsinghuaAI/CPM-Generate <Tip> CPM's architecture is the same as GPT-2, except for its tokenization method. Refer to the [GPT-2 documentation](gpt2) for API reference information. </Tip>
418_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/#cpmtokenizer
.md
Runs pre-tokenization with the Jieba segmentation tool. It is used in CPM models.
418_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpm.md
https://huggingface.co/docs/transformers/en/model_doc/cpm/#cpmtokenizerfast
.md
Runs pre-tokenization with the Jieba segmentation tool. It is used in CPM models.
418_3_0
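A minimal sketch, assuming the `TsinghuaAI/CPM-Generate` checkpoint and that the `jieba` and `sentencepiece` packages are installed.

```python
from transformers import CpmTokenizer

# Assumed checkpoint; CpmTokenizer needs `jieba` (pre-tokenization) and `sentencepiece` installed.
tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")

# Jieba segments the Chinese text before the subword vocabulary is applied.
tokens = tokenizer.tokenize("清华大学发布了大规模中文预训练语言模型。")
print(tokens)
print(tokenizer.convert_tokens_to_ids(tokens))
```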
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
419_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
419_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#overview
.md
The PatchTST model was proposed in [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong and Jayant Kalagnanam. At a high level, the model vectorizes the time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer, which then outputs the prediction-length forecast via an appropriate head. The model is illustrated in the following figure:
419_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#overview
.md
![model](https://github.com/namctin/transformers/assets/8100/150af169-29de-419a-8d98-eb78251c21fa) The abstract from the paper is the following:
419_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#overview
.md
*We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local
419_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#overview
.md
the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We
419_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#overview
.md
can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy.*
419_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#overview
.md
This model was contributed by [namctin](https://huggingface.co/namctin), [gsinthong](https://huggingface.co/gsinthong), [diepi](https://huggingface.co/diepi), [vijaye12](https://huggingface.co/vijaye12), [wmgifford](https://huggingface.co/wmgifford), and [kashif](https://huggingface.co/kashif). The original code can be found [here](https://github.com/yuqinie98/PatchTST).
419_1_5
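A forecasting sketch built only from random tensors, illustrating the patching-based prediction head described in the overview; the `PatchTSTConfig` parameter names and the `prediction_outputs` attribute are assumptions to verify against the PatchTST API reference.

```python
import torch
from transformers import PatchTSTConfig, PatchTSTForPrediction

# Parameter names are assumptions; values are illustrative.
config = PatchTSTConfig(
    num_input_channels=2,   # multivariate series with 2 channels
    context_length=512,     # look-back window
    prediction_length=96,   # forecast horizon
    patch_length=16,
    patch_stride=16,
)
model = PatchTSTForPrediction(config)  # randomly initialized, for shape checking only

past_values = torch.randn(4, 512, 2)   # (batch_size, context_length, num_input_channels)
outputs = model(past_values=past_values)
print(outputs.prediction_outputs.shape)  # expected (batch_size, prediction_length, num_input_channels)
```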
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#usage-tips
.md
The model can also be used for time series classification and time series regression. See the respective [`PatchTSTForClassification`] and [`PatchTSTForRegression`] classes.
419_2_0
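A hedged sketch of the classification variant mentioned here; the config parameters (in particular `num_targets`) and the `prediction_logits` output attribute are assumptions to check against the class documentation.

```python
import torch
from transformers import PatchTSTConfig, PatchTSTForClassification

config = PatchTSTConfig(
    num_input_channels=3,
    context_length=512,
    patch_length=16,
    patch_stride=16,
    num_targets=4,          # assumed name for the number of classes
)
model = PatchTSTForClassification(config)  # randomly initialized

past_values = torch.randn(2, 512, 3)  # (batch_size, context_length, num_input_channels)
outputs = model(past_values=past_values)
print(outputs.prediction_logits.shape)  # expected (batch_size, num_targets)
```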
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/patchtst.md
https://huggingface.co/docs/transformers/en/model_doc/patchtst/#resources
.md
- A blog post explaining PatchTST in depth can be found [here](https://huggingface.co/blog/patchtst). The blog can also be opened in Google Colab.
419_3_0