source: stringclasses (470 values)
url: stringlengths (49 to 167)
file_type: stringclasses (1 value)
chunk: stringlengths (1 to 512)
chunk_id: stringlengths (5 to 9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizer
.md
accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. These tokens can be retrieved by calling the get_sentinel_tokens method, and their token ids can be retrieved by calling the get_sentinel_token_ids method. additional_special_tokens (`List[str]`, *optional*): Additional special tokens used by the tokenizer. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
294_9_3
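For illustration, a minimal sketch of inspecting the sentinel tokens, assuming the `google-t5/t5-base` checkpoint used elsewhere in these docs and the default `extra_ids=100`:

```python
>>> from transformers import T5Tokenizer

>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base")
>>> sentinels = tokenizer.get_sentinel_tokens()        # tokens like "<extra_id_0>" (order may vary)
>>> sentinel_ids = tokenizer.get_sentinel_token_ids()  # the corresponding vocabulary ids
>>> len(sentinels)  # one per extra id, 100 by default
100
```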
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizer
.md
sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results.
294_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizer
.md
- `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assumes that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. legacy (`bool`, *optional*):
294_9_5
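As an illustrative, hedged sketch of enabling subword regularization through `sp_model_kwargs` with the slow (SentencePiece-backed) tokenizer; the checkpoint and parameter values here are example choices, and the sampled segmentation will vary between calls:

```python
>>> from transformers import T5Tokenizer

>>> tokenizer = T5Tokenizer.from_pretrained(
...     "google-t5/t5-base",
...     sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
... )
>>> tokenizer.tokenize("New York")  # segmentation is sampled, so repeated calls can differ
```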
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizer
.md
BPE-dropout. legacy (`bool`, *optional*): Whether or not the `legacy` behaviour of the tokenizer should be used. Legacy refers to the behaviour before the merge of #24622 and #25224, which include fixes to properly handle tokens that appear after special tokens. A simple example: - `legacy=True`: ```python >>> from transformers import T5Tokenizer
294_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizer
.md
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base", legacy=True) >>> tokenizer.encode("Hello <extra_id_0>.") [8774, 32099, 3, 5, 1] ``` - `legacy=False`: ```python >>> from transformers import T5Tokenizer
294_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizer
.md
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base", legacy=False) >>> tokenizer.encode("Hello <extra_id_0>.") # the extra space `[3]` is no longer here [8774, 32099, 5, 1] ``` Check out the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word. Attributes:
294_9_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizer
.md
other word. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
294_9_9
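For illustration, a small sketch of `build_inputs_with_special_tokens`, which appends the `</s>` (eos) token to a single sequence; the input sentence is an arbitrary example:

```python
>>> from transformers import T5Tokenizer

>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base")
>>> ids = tokenizer("Studies have shown", add_special_tokens=False)["input_ids"]
>>> tokenizer.build_inputs_with_special_tokens(ids)[-1] == tokenizer.eos_token_id
True
```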
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizerfast
.md
Construct a "fast" T5 tokenizer (backed by HuggingFace's *tokenizers* library). Based on [Unigram](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models). This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
294_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizerfast
.md
Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`):
294_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizerfast
.md
The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. extra_ids (`int`, *optional*, defaults to 100):
294_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizerfast
.md
extra_ids (`int`, *optional*, defaults to 100): Adds a number of extra ids to the vocabulary for use as sentinels. These tokens are accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. These tokens can be retrieved by calling the get_sentinel_tokens method, and their token ids can be retrieved by calling the get_sentinel_token_ids method. additional_special_tokens (`List[str]`, *optional*): Additional special tokens used by the tokenizer. add_prefix_space (`bool`, *optional*):
294_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5tokenizerfast
.md
Additional special tokens used by the tokenizer. add_prefix_space (`bool`, *optional*): Whether or not the tokenizer should automatically add a prefix space. from_slow (`bool`, *optional*, defaults to `False`): Whether or not the tokenizer should be converted from a slow one. If `add_prefix_space` is set, this will be set to `True`. <frameworkcontent> <pt>
294_10_4
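A minimal usage sketch of the fast tokenizer; the checkpoint name mirrors the slow-tokenizer examples above, and the input strings are arbitrary:

```python
>>> from transformers import T5TokenizerFast

>>> tokenizer = T5TokenizerFast.from_pretrained("google-t5/t5-base")
>>> batch = tokenizer(
...     ["translate English to German: Hello.", "summarize: A longer input."],
...     padding=True,
...     return_tensors="pt",
... )
>>> batch["input_ids"].shape[0]  # one row per input sentence
2
```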
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5model
.md
The bare T5 Model transformer outputting raw hidden-states without any specific head on top. The T5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting.
294_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5model
.md
text-to-text denoising generative setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
294_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5model
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`T5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
294_11_2
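A minimal sketch of the `forward` pass with the bare encoder-decoder, assuming the `google-t5/t5-small` checkpoint; the decoder inputs are shifted right with the model's shift helper, mirroring the library's own docstring example:

```python
>>> from transformers import AutoTokenizer, T5Model

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5Model.from_pretrained("google-t5/t5-small")

>>> input_ids = tokenizer("Studies have shown that owning a dog is good for you", return_tensors="pt").input_ids
>>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids

>>> # for teacher forcing, targets are shifted right to become decoder inputs
>>> decoder_input_ids = model._shift_right(decoder_input_ids)

>>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
>>> last_hidden_states = outputs.last_hidden_state  # decoder hidden states
```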
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forconditionalgeneration
.md
T5 Model with a `language modeling` head on top. The T5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting.
294_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forconditionalgeneration
.md
text-to-text denoising generative setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
294_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forconditionalgeneration
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`T5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
294_12_2
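A minimal generation sketch with the language modeling head, assuming the `google-t5/t5-small` checkpoint; the exact output text depends on the checkpoint:

```python
>>> from transformers import AutoTokenizer, T5ForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

>>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, max_new_tokens=20)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)  # e.g. a German translation of the prompt
```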
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5encodermodel
.md
The bare T5 Model transformer outputting encoder's raw hidden-states without any specific head on top. The T5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting.
294_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5encodermodel
.md
text-to-text denoising generative setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
294_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5encodermodel
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`T5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
294_13_2
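A minimal sketch of running only the encoder stack, assuming the `google-t5/t5-small` checkpoint:

```python
>>> from transformers import AutoTokenizer, T5EncoderModel

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5EncoderModel.from_pretrained("google-t5/t5-small")

>>> input_ids = tokenizer("Studies have shown that owning a dog is good for you", return_tensors="pt").input_ids
>>> outputs = model(input_ids=input_ids)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, d_model)
```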
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forsequenceclassification
.md
T5 model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. The T5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting.
294_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forsequenceclassification
.md
text-to-text denoising generative setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
294_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forsequenceclassification
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`T5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
294_14_2
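An illustrative sketch, assuming the `google-t5/t5-small` checkpoint and a hypothetical two-label task; the classification head is randomly initialized until fine-tuned, so the logits themselves are not meaningful here:

```python
>>> import torch
>>> from transformers import AutoTokenizer, T5ForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForSequenceClassification.from_pretrained("google-t5/t5-small", num_labels=2)  # head is randomly initialized

>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> logits.shape  # (batch_size, num_labels)
```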
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5fortokenclassification
.md
T5 Encoder Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The T5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
294_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5fortokenclassification
.md
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
294_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5fortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`T5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
294_15_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5fortokenclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
294_15_3
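An illustrative sketch of the encoder-only token classification head, assuming the `google-t5/t5-small` checkpoint and a hypothetical five-label tag set; the head is randomly initialized until fine-tuned:

```python
>>> import torch
>>> from transformers import AutoTokenizer, T5ForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForTokenClassification.from_pretrained("google-t5/t5-small", num_labels=5)  # head is randomly initialized

>>> inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> logits.shape  # (batch_size, sequence_length, num_labels)
```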
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forquestionanswering
.md
T5 Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). The T5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
294_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forquestionanswering
.md
Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
294_16_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`T5Config`]): Model configuration class with all the parameters of the model.
294_16_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#t5forquestionanswering
.md
and behavior. Parameters: config ([`T5Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
294_16_3
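A PyTorch sketch of the extractive QA head, assuming the `google-t5/t5-small` checkpoint; the span classification layers are randomly initialized until fine-tuned, and the question/context pair is an arbitrary example:

```python
>>> import torch
>>> from transformers import AutoTokenizer, T5ForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForQuestionAnswering.from_pretrained("google-t5/t5-small")  # QA head is randomly initialized

>>> question, context = "Where does the rain stay?", "The rain in Spain stays mainly in the plain."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.start_logits.shape, outputs.end_logits.shape  # both (batch_size, sequence_length)
```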
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#tft5model
.md
No docstring available for TFT5Model Methods: call
294_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#tft5forconditionalgeneration
.md
No docstring available for TFT5ForConditionalGeneration Methods: call
294_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#tft5encodermodel
.md
No docstring available for TFT5EncoderModel Methods: call </tf> <jax>
294_19_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#flaxt5model
.md
No docstring available for FlaxT5Model Methods: __call__ - encode - decode
294_20_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#flaxt5forconditionalgeneration
.md
No docstring available for FlaxT5ForConditionalGeneration Methods: __call__ - encode - decode
294_21_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5.md
https://huggingface.co/docs/transformers/en/model_doc/t5/#flaxt5encodermodel
.md
No docstring available for FlaxT5EncoderModel Methods: __call__ </jax> </frameworkcontent>
294_22_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
295_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
295_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#openai-gpt2
.md
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=gpt2"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-gpt2-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/gpt2"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
295_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#overview
.md
The OpenAI GPT-2 model was proposed in [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from [OpenAI](https://huggingface.co/openai). It's a causal (unidirectional) transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. The abstract from the paper is the following:
295_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#overview
.md
The abstract from the paper is the following: *GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks
295_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#overview
.md
text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.* [Write With Transformer](https://transformer.huggingface.co/doc/gpt2-large) is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five
295_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#overview
.md
Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: *distilgpt-2*. This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://openai.com/blog/better-language-models/).
295_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage-tips
.md
- GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the *run_generation.py* example script.
295_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage-tips
.md
observed in the *run_generation.py* example script. - The model can take the *past_key_values* (for PyTorch) or *past* (for TF) as input, which are the previously computed key/value attention pairs. Using this (*past_key_values* or *past*) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see the *past_key_values* argument of the [`GPT2Model.forward`] method, or for TF the *past* argument of the
295_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage-tips
.md
[`GPT2Model.forward`] method, or for TF the *past* argument of the [`TFGPT2Model.call`] method for more information on its usage. - Enabling the *scale_attn_by_inverse_layer_idx* and *reorder_and_upcast_attn* flags will apply the training stability improvements from [Mistral](https://github.com/stanford-crfm/mistral/) (for PyTorch only).
295_3_2
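A small PyTorch sketch of reusing the cache, assuming the `gpt2` checkpoint used in the examples below; after the first forward pass, only the newly chosen token is fed back together with `past_key_values`:

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")

>>> inputs = tokenizer("GPT-2 caches its", return_tensors="pt")
>>> with torch.no_grad():
...     out = model(**inputs, use_cache=True)
>>> past = out.past_key_values                      # cached key/value pairs, one entry per layer
>>> next_token = out.logits[:, -1:].argmax(dim=-1)  # greedy pick of the next token

>>> # second step: pass only the new token plus the cache instead of the full sequence
>>> with torch.no_grad():
...     out = model(input_ids=next_token, past_key_values=past, use_cache=True)
```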
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage-example
.md
The `generate()` method can be used to generate text using the GPT2 model. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> model = AutoModelForCausalLM.from_pretrained("gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> prompt = "GPT2 is a model developed by OpenAI." >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
295_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage-example
.md
>>> prompt = "GPT2 is a model developed by OpenAI." >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ```
295_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#using-flash-attention-2
.md
Flash Attention 2 is a faster, optimized version of the attention scores computation which relies on `cuda` kernels.
295_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#installation
.md
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer).
295_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#installation
.md
Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ```
295_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage
.md
To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). We'll also load the model in half-precision (e.g. `torch.float16`), since it results in almost no degradation in generation quality but significantly lower memory usage and faster inference: ```python >>> import torch
295_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage
.md
```python >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> device = "cuda" # the device to load the model onto
295_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#usage
.md
>>> model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="flash_attention_2") >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> prompt = "def hello_world():" >>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device) >>> model.to(device) >>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] ```
295_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#expected-speedups
.md
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `gpt2` checkpoint and the Flash Attention 2 version of the model using a sequence length of 512. <div style="text-align: center"> <img src="https://huggingface.co/datasets/EduardoPacheco/documentation-images/resolve/main/gpt2_flash_attention_2_speedup.jpg"> </div>
295_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
295_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. ```python import torch from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="sdpa") ... ```
295_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#using-scaled-dot-product-attention-sdpa
.md
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="sdpa") ... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (rtx3080ti-16GB, PyTorch 2.2.1, OS Ubuntu 22.04) using `float16` with [gpt2-large](https://huggingface.co/openai-community/gpt2-large), we saw the following speedups during training and inference.
295_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#training
.md
| Batch size | Seq len | Time per batch (Eager - s) | Time per batch (SDPA - s) | Speedup (%) | Eager peak mem (MB) | SDPA peak mem (MB) | Mem saving (%) |
|-----------:|--------:|----------------------------:|--------------------------:|------------:|--------------------:|-------------------:|------------------:|
| 1 | 128 | 0.039 | 0.032 | 23.042 | 3482.32 | 3494.62 | -0.352 |
295_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#training
.md
| 1 | 256 | 0.073 | 0.059 | 25.15 | 3546.66 | 3552.6 | -0.167 |
| 1 | 512 | 0.155 | 0.118 | 30.96 | 4230.1 | 3665.59 | 15.4 |
| 1 | 1024 | 0.316 | 0.209 | 50.839 | 8682.26 | 4881.09 | 77.875 |
295_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#training
.md
| 2 | 128 | 0.07 | 0.06 | 15.324 | 3557.8 | 3545.91 | 0.335 |
| 2 | 256 | 0.143 | 0.122 | 16.53 | 3901.5 | 3657.68 | 6.666 |
| 2 | 512 | 0.267 | 0.213 | 25.626 | 7062.21 | 4876.47 | 44.822 |
295_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#training
.md
| 2 | 1024 | OOM | 0.404 | / | OOM | 8096.35 | SDPA does not OOM |
| 4 | 128 | 0.134 | 0.128 | 4.412 | 3675.79 | 3648.72 | 0.742 |
| 4 | 256 | 0.243 | 0.217 | 12.292 | 6129.76 | 4871.12 | 25.839 |
295_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#training
.md
| 4 | 512 | 0.494 | 0.406 | 21.687 | 12466.6 | 8102.64 | 53.858 |
| 4 | 1024 | OOM | 0.795 | / | OOM | 14568.2 | SDPA does not OOM |
295_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#inference
.md
| Batch size | Seq len | Per token latency Eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem Eager (MB) | Mem SDPA (MB) | Mem saved (%) |
|-----------:|--------:|-----------------------------:|----------------------------:|------------:|---------------:|--------------:|--------------:|
| 1 | 128 | 7.991 | 6.968 | 14.681 | 1685.2 | 1701.32 | -0.947 |
295_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#inference
.md
| 1 | 256 | 8.462 | 7.199 | 17.536 | 1745.49 | 1770.78 | -1.428 |
| 1 | 512 | 8.68 | 7.853 | 10.529 | 1907.69 | 1921.29 | -0.708 |
| 1 | 768 | 9.101 | 8.365 | 8.791 | 2032.93 | 2068.12 | -1.701 |
295_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#inference
.md
| 2 | 128 | 9.169 | 9.001 | 1.861 | 1803.84 | 1811.4 | -0.418 |
| 2 | 256 | 9.907 | 9.78 | 1.294 | 1907.72 | 1921.44 | -0.714 |
| 2 | 512 | 11.519 | 11.644 | -1.071 | 2176.86 | 2197.75 | -0.951 |
295_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#inference
.md
| 2 | 768 | 13.022 | 13.407 | -2.873 | 2464.3 | 2491.06 | -1.074 |
| 4 | 128 | 10.097 | 9.831 | 2.709 | 1942.25 | 1985.13 | -2.16 |
| 4 | 256 | 11.599 | 11.398 | 1.764 | 2177.28 | 2197.86 | -0.937 |
295_11_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#inference
.md
| 4 | 512 | 14.653 | 14.45 | 1.411 | 2753.16 | 2772.57 | -0.7 |
| 4 | 768 | 17.846 | 17.617 | 1.299 | 3327.04 | 3343.97 | -0.506 |
295_11_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-generation"/>
295_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
<PipelineTag pipeline="text-generation"/> - A blog on how to [Finetune a non-English GPT-2 Model with Hugging Face](https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface). - A blog on [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate) with GPT-2. - A blog on [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot), a large GPT-2 model.
295_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
- A blog on [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot), a large GPT-2 model. - A blog on [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate) with GPT-2. - A blog on [How to train a Language Model with Megatron-LM](https://huggingface.co/blog/megatron-training) with a GPT-2 model.
295_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
- A blog on [How to train a Language Model with Megatron-LM](https://huggingface.co/blog/megatron-training) with a GPT-2 model. - A notebook on how to [finetune GPT2 to generate lyrics in the style of your favorite artist](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb). 🌎
295_12_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
- A notebook on how to [finetune GPT2 to generate tweets in the style of your favorite Twitter user](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb). 🌎 - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course.
295_12_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
- [`GPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
295_12_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
- [`TFGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
295_12_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
- [`FlaxGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb). - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification)
295_12_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#resources
.md
- [Token classification task guide](../tasks/token_classification) - [Causal language modeling task guide](../tasks/language_modeling)
295_12_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
This is the configuration class to store the configuration of a [`GPT2Model`] or a [`TFGPT2Model`]. It is used to instantiate a GPT-2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPT-2 [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
295_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50257): Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`GPT2Model`] or [`TFGPT2Model`]. n_positions (`int`, *optional*, defaults to 1024):
295_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
`input_ids` passed when calling [`GPT2Model`] or [`TFGPT2Model`]. n_positions (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (`int`, *optional*, defaults to 768): Dimensionality of the embeddings and hidden states. n_layer (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. n_head (`int`, *optional*, defaults to 12):
295_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
Number of hidden layers in the Transformer encoder. n_head (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. n_inner (`int`, *optional*): Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd activation_function (`str`, *optional*, defaults to `"gelu_new"`): Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`. resid_pdrop (`float`, *optional*, defaults to 0.1):
295_13_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
resid_pdrop (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (`float`, *optional*, defaults to 0.1): The dropout ratio for the embeddings. attn_pdrop (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02):
295_13_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
The epsilon to use in the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. summary_type (`string`, *optional*, defaults to `"cls_index"`): Argument used when doing sequence summary, used in the models [`GPT2DoubleHeadsModel`] and [`TFGPT2DoubleHeadsModel`]. Has to be one of the following options: - `"last"`: Take the last token hidden state (like XLNet).
295_13_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
Has to be one of the following options: - `"last"`: Take the last token hidden state (like XLNet). - `"first"`: Take the first token hidden state (like BERT). - `"mean"`: Take the mean of all tokens hidden states. - `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2). - `"attn"`: Not implemented now, use multi-head attention. summary_use_proj (`bool`, *optional*, defaults to `True`): Argument used when doing sequence summary, used in the models [`GPT2DoubleHeadsModel`] and
295_13_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
Argument used when doing sequence summary, used in the models [`GPT2DoubleHeadsModel`] and [`TFGPT2DoubleHeadsModel`]. Whether or not to add a projection after the vector extraction. summary_activation (`str`, *optional*): Argument used when doing sequence summary. Used for the multiple choice head in [`GPT2DoubleHeadsModel`]. Pass `"tanh"` for a tanh activation to the output, any other value will result in no activation. summary_proj_to_labels (`bool`, *optional*, defaults to `True`):
295_13_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
summary_proj_to_labels (`bool`, *optional*, defaults to `True`): Argument used when doing sequence summary, used in the models [`GPT2DoubleHeadsModel`] and [`TFGPT2DoubleHeadsModel`]. Whether the projection outputs should have `config.num_labels` or `config.hidden_size` classes. summary_first_dropout (`float`, *optional*, defaults to 0.1): Argument used when doing sequence summary, used in the models [`GPT2DoubleHeadsModel`] and [`TFGPT2DoubleHeadsModel`].
295_13_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
Argument used when doing sequence summary, used in the models [`GPT2DoubleHeadsModel`] and [`TFGPT2DoubleHeadsModel`]. The dropout ratio to be used after the projection and activation. scale_attn_weights (`bool`, *optional*, defaults to `True`): Scale attention weights by dividing by sqrt(hidden_size). use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). bos_token_id (`int`, *optional*, defaults to 50256):
295_13_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
bos_token_id (`int`, *optional*, defaults to 50256): Id of the beginning of sentence token in the vocabulary. eos_token_id (`int`, *optional*, defaults to 50256): Id of the end of sentence token in the vocabulary. scale_attn_by_inverse_layer_idx (`bool`, *optional*, defaults to `False`): Whether to additionally scale attention weights by `1 / (layer_idx + 1)`. reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`):
295_13_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`): Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention dot-product/softmax to float() when training with mixed precision. Example: ```python >>> from transformers import GPT2Config, GPT2Model
295_13_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2config
.md
>>> # Initializing a GPT2 configuration >>> configuration = GPT2Config() >>> # Initializing a model (with random weights) from the configuration >>> model = GPT2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
295_13_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizer
.md
Construct a GPT-2 tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not: ```python >>> from transformers import GPT2Tokenizer >>> tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2") >>> tokenizer("Hello world")["input_ids"] [15496, 995]
295_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizer
.md
>>> tokenizer(" Hello world")["input_ids"] [18435, 995] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip>
295_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizer
.md
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip> This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See
295_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizer
.md
errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The beginning of sequence token.
295_14_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizer
.md
token instead. bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The end of sequence token. pad_token (`str`, *optional*): The token used for padding, for example when batching sequences of different lengths. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
295_14_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizer
.md
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word. (The GPT2 tokenizer detects the beginning of words by the preceding space.) add_bos_token (`bool`, *optional*, defaults to `False`): Whether or not to add an initial beginning of sentence token to the input. This allows the leading word to be treated just like any other word. Methods: save_vocabulary
295_14_5
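For illustration, a sketch of the `add_prefix_space` flag, which makes a sentence-initial word encode the same way as when it is preceded by a space (compare the two encodings shown earlier):

```python
>>> from transformers import GPT2Tokenizer

>>> tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]  # now encoded like " Hello world" above
[18435, 995]
```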
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizerfast
.md
Construct a "fast" GPT-2 tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: ```python >>> from transformers import GPT2TokenizerFast
295_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt2.md
https://huggingface.co/docs/transformers/en/model_doc/gpt2/#gpt2tokenizerfast
.md
>>> tokenizer = GPT2TokenizerFast.from_pretrained("openai-community/gpt2") >>> tokenizer("Hello world")["input_ids"] [15496, 995]
295_15_1