source: string (470 distinct values)
url: string (length 49–167)
file_type: string (1 distinct value)
chunk: string (length 1–512)
chunk_id: string (length 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
.md
The dropout probability for all fully connected layers in the embeddings and encoder. attn_drop_rate (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. drop_path_rate (`float`, *optional*, defaults to 0.0): The stochastic depth rate. output_a3_attentions (`bool`, *optional*, defaults to `False`): Whether or not the model should return A^3 module attentions. initializer_range (`float`, *optional*, defaults to 0.02):
237_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
.md
Whether or not the model should return A^3 module attentions. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers import MgpstrConfig, MgpstrForSceneTextRecognition
237_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
.md
>>> # Initializing a Mgpstr mgp-str-base style configuration >>> configuration = MgpstrConfig() >>> # Initializing a model (with random weights) from the mgp-str-base style configuration >>> model = MgpstrForSceneTextRecognition(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
237_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrtokenizer
.md
Construct a MGP-STR char tokenizer. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. unk_token (`str`, *optional*, defaults to `"[GO]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"[GO]"`):
237_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrtokenizer
.md
token instead. bos_token (`str`, *optional*, defaults to `"[GO]"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"[s]"`): The end of sequence token. pad_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"[GO]"`): A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Methods: save_vocabulary
237_4_1
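A short usage sketch for the char tokenizer above; it assumes the `alibaba-damo/mgp-str-base` checkpoint hosts this character-level vocabulary.

```python
from transformers import MgpstrTokenizer

tokenizer = MgpstrTokenizer.from_pretrained("alibaba-damo/mgp-str-base")

# MGP-STR tokenizes at the character level, so each character maps to one id.
ids = tokenizer("ticket")["input_ids"]
chars = tokenizer.convert_ids_to_tokens(ids)
```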
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrprocessor
.md
Constructs an MGP-STR processor which wraps an image processor and an MGP-STR tokenizer into a single processor. [`MgpstrProcessor`] offers all the functionalities of [`ViTImageProcessor`] and [`MgpstrTokenizer`]. See the [`~MgpstrProcessor.__call__`] and [`~MgpstrProcessor.batch_decode`] for more information. Args: image_processor (`ViTImageProcessor`, *optional*): An instance of [`ViTImageProcessor`]. The image processor is a required input. tokenizer ([`MgpstrTokenizer`], *optional*):
237_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrprocessor
.md
An instance of `ViTImageProcessor`. The image processor is a required input. tokenizer ([`MgpstrTokenizer`], *optional*): The tokenizer is a required input. Methods: __call__ - batch_decode
237_5_1
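A minimal construction sketch for [`MgpstrProcessor`], assuming the `alibaba-damo/mgp-str-base` checkpoint provides both components.

```python
from transformers import MgpstrProcessor, MgpstrTokenizer, ViTImageProcessor

# Both components are required at construction time, despite being marked *optional*.
image_processor = ViTImageProcessor.from_pretrained("alibaba-damo/mgp-str-base")
tokenizer = MgpstrTokenizer.from_pretrained("alibaba-damo/mgp-str-base")
processor = MgpstrProcessor(image_processor=image_processor, tokenizer=tokenizer)

# Equivalently, load both halves in one call:
# processor = MgpstrProcessor.from_pretrained("alibaba-damo/mgp-str-base")
```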
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrmodel
.md
The bare MGP-STR Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MgpstrConfig`]): Model configuration class with all the parameters of the model.
237_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrmodel
.md
behavior. Parameters: config ([`MgpstrConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
237_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrforscenetextrecognition
.md
MGP-STR Model transformer with three classification heads on top (three A^3 modules and three linear layers on top of the transformer encoder output) for scene text recognition (STR). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
237_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrforscenetextrecognition
.md
behavior. Parameters: config ([`MgpstrConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
237_7_1
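An end-to-end inference sketch combining the processor and the three-headed model above; the checkpoint name and image URL are illustrative.

```python
import requests
from PIL import Image
from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition

processor = MgpstrProcessor.from_pretrained("alibaba-damo/mgp-str-base")
model = MgpstrForSceneTextRecognition.from_pretrained("alibaba-damo/mgp-str-base")

# A cropped word image, as expected for scene text recognition.
url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model(pixel_values)

# batch_decode fuses the char/bpe/wordpiece head predictions into text.
text = processor.batch_decode(outputs.logits)["generated_text"]
```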
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
238_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
238_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#mega
.md
<Tip warning={true}> This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2. You can do so by running the following command: `pip install -U transformers==4.40.2`. </Tip>
238_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#overview
.md
The MEGA model was proposed in [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. MEGA proposes a new approach to self-attention with each encoder layer having a multi-headed exponential moving average in addition to a single head of standard dot-product attention, giving the attention mechanism
238_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#overview
.md
stronger positional biases. This allows MEGA to perform competitively to Transformers on standard benchmarks including LRA while also having significantly fewer parameters. MEGA's compute efficiency allows it to scale to very long sequences, making it an attractive option for long-document NLP tasks. The abstract from the paper is the following:
238_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#overview
.md
*The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear
238_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#overview
.md
local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant
238_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#overview
.md
translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models.*
238_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#overview
.md
This model was contributed by [mnaylor](https://huggingface.co/mnaylor). The original code can be found [here](https://github.com/facebookresearch/mega).
238_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#usage-tips
.md
- MEGA can perform quite well with relatively few parameters. See Appendix D in the MEGA paper for examples of architectural specs which perform well in various settings. If using MEGA as a decoder, be sure to set `bidirectional=False` to avoid errors with the default bidirectional setting. - Mega-chunk is a variant of MEGA that reduces time and space complexity from quadratic to linear. Enable chunking with `MegaConfig.use_chunking` and control the chunk size with `MegaConfig.chunk_size`, as sketched below.
238_3_0
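A minimal sketch of the two tips above, assuming transformers==4.40.2 (the last release that ships MEGA); the sizes are illustrative.

```python
from transformers import MegaConfig, MegaModel

# Mega-chunk: linear time/space complexity via fixed-length chunks.
chunked_config = MegaConfig(use_chunking=True, chunk_size=64)

# Decoder use: bidirectional EMA is incompatible with causal decoding.
decoder_config = MegaConfig(is_decoder=True, bidirectional=False)

model = MegaModel(chunked_config)
```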
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#implementation-notes
.md
- The original implementation of MEGA had an inconsistent expectation of attention masks for padding and causal self-attention between the softmax attention and Laplace/squared ReLU method. This implementation addresses that inconsistency. - The original implementation did not include token type embeddings; this implementation adds support for these, with the option controlled by `MegaConfig.add_token_type_embeddings`.
238_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
This is the configuration class to store the configuration of a [`MegaModel`]. It is used to instantiate a Mega model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Mega [mnaylor/mega-base-wikitext](https://huggingface.co/mnaylor/mega-base-wikitext) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
238_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the Mega model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`MegaModel`]. hidden_size (`int`, *optional*, defaults to 128): Dimensionality of the encoder layers and the pooler layer.
238_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
hidden_size (`int`, *optional*, defaults to 128): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 4): Number of hidden layers in the Mega encoder. intermediate_size (`int`, *optional*, defaults to 256): Dimensionality of the hidden size (self-attention value projection) within the Mega encoder ema_projection_size (`int`, *optional*, defaults to 16): Dimensionality of the MegaMultiDimensionDampedEma
238_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
ema_projection_size (`int`, *optional*, defaults to 16): Dimensionality of the MegaMultiDimensionDampedEma bidirectional (`bool`, *optional*, defaults to `True`): Whether the MegaMultiDimensionDampedEma used in Mega's self-attention should work bidirectionally (`True`) or unidirectionally (`False`). Bidirectional EMA is incompatible with causal decoding, so this should be False if you intend to use the model as a decoder. shared_representation_size (`int`, *optional*, defaults to 64):
238_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
False if you intend to use the model as a decoder. shared_representation_size (`int`, *optional*, defaults to 64): Dimensionality of the linear projection for shared representation of self-attention queries and keys use_chunking (`bool`, *optional*, defaults to `False`): Whether to chunk inputs for linear self-attention complexity (described as Mega-chunk in the paper) chunk_size (`int`, *optional*, defaults to -1):
238_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
chunk_size (`int`, *optional*, defaults to -1): If `use_chunking` is set to `True`, determines the size of the chunks to apply to the input sequence. If chunking is used, input sequences must be padded to a multiple of `chunk_size` truncation (`int`, *optional*): If specified, the sequence length for which to truncate MegaMultiDimensionDampedEma normalize_before_mega (`bool`, *optional*, defaults to `True`): Whether to normalize before (`True`) or after (`False`) passing through Mega encoder blocks
238_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
Whether to normalize before (`True`) or after (`False`) passing through Mega encoder blocks normalization_type (`str`, *optional*, defaults to `"scalenorm"`): Type of normalization to use in Mega encoder blocks. Choose one of `"scalenorm"`, `"layernorm"`, `"rmsnorm"`, `"batchnorm"`, or `"syncbatchnorm"` (GPU required for syncbatchnorm) norm_affine (`bool`, *optional*, defaults to `True`): If `True`, applies a parameterized affine transformation to inputs during normalization
238_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
If `True`, applies a parameterized affine transformation to inputs during normalization activation (`str`, *optional*, defaults to `"silu"`): Activation function to apply within Mega encoder blocks. Choose one of `"silu"`, `"relu"`, `"linear"`, `"gelu"`, or `"gelu_accurate"` attention_activation (`str`, *optional*, defaults to `"softmax"`): Activation function to apply for single-headed self-attention (a la Transformer). Choose one of `"softmax"`, `"laplace"`, or `"relu2"`
238_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
`"softmax"`, `"laplace"`, or `"relu2"` dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for EMA self-attention hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. use_feature_dropout (`bool`, *optional*, defaults to `False`):
238_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
The dropout ratio for the attention probabilities. use_feature_dropout (`bool`, *optional*, defaults to `False`): Whether to use feature-based (`True`) or standard dropout (`False`) use_normalized_ffn (`bool`, *optional*, defaults to `True`): Whether to use the normalized feed-forward sub-layer in Mega blocks (`True`) or pass Mega encoder output as-is (`False`) nffn_hidden_size (`int`, *optional*, defaults to 256):
238_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
as-is (`False`) nffn_hidden_size (`int`, *optional*, defaults to 256): If using the normalized feed-forward network (NFFN) layer within Mega (`use_normalized_ffn = True`), this is the hidden size of the NFFN normalize_before_ffn (`bool`, *optional*, defaults to `True`): Whether to normalize before (`True`) or after (`False`) the feed-forward portion of NFFN nffn_activation_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the NFFN component.
238_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
nffn_activation_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the NFFN component. max_positions (`int`, *optional*, defaults to 2048): The maximum sequence length to use for positional representations. For `"simple"` relative positional bias, this is a hard limit on input length; `"rotary"` relative positional bias will extrapolate to longer sequences add_token_type_embeddings (`bool`, *optional*, defaults to `True`):
238_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
sequences add_token_type_embeddings (`bool`, *optional*, defaults to `True`): Whether to account for token types in embeddings. Left as optional to maintain compatibility with original implementation while adding support for token types. type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`MegaModel`]. Only used if `add_token_type_embeddings = True` initializer_range (`float`, *optional*, defaults to 0.02):
238_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
`add_token_type_embeddings = True` initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. ema_delta_alpha_range (`float`, *optional*, defaults to 0.2): The standard deviation for initializing the delta (damping factor) and alpha (decay factor) parameters in MegaMultiDimensionDampedEma. ema_beta_range (`float`, *optional*, defaults to 0.02):
238_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
MegaMultiDimensionDampedEma. ema_beta_range (`float`, *optional*, defaults to 0.02): The standard deviation for initializing the beta parameter (expansion matrix) in MegaMultiDimensionDampedEma. ema_gamma_omega_range (`float`, *optional*, defaults to 1.0): The standard deviation for initializing the gamma (projection matrix) and omega (residual weight) parameters in MultiDimensionEMA. relative_positional_bias (`str`, *optional*, defaults to `"rotary"`):
238_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
parameters in MultiDimensionEMA. relative_positional_bias (`str`, *optional*, defaults to `"rotary"`): Type of relative positional encoding. Choose one of `"rotary"` or `"simple"`. If `"simple"` is selected, `max_positions` is used as a limit on input size, while `"rotary"` extrapolates beyond `max_positions`. is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`):
238_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. add_lm_hidden_dense_layer (`bool`, *optional*, defaults to `True`): Whether to include a hidden layer for projection between encoder outputs and LM heads (`True`) or pass
238_5_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
Whether to include a hidden layer for projection between encoder outputs and LM heads (`True`) or pass hidden states directly to the LM head (`False`). Remains optional for compatibility with the original implementation. Examples: ```python >>> from transformers import MegaConfig, MegaModel
238_5_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaconfig
.md
>>> # Initializing a Mega configuration >>> configuration = MegaConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = MegaModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
238_5_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megamodel
.md
The bare MEGA Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
238_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megamodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
238_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megamodel
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added after self-attention, following the architecture described in *Mega: Moving Average
238_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megamodel
.md
cross-attention is added after self-attention, following the architecture described in [*Mega: Moving Average Equipped Gated Attention*](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True` and `bidirectional` set to `False`. To be used in a Seq2Seq model, the model needs to be initialized with both
238_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megamodel
.md
`True` and `bidirectional` set to `False`. To be used in a Seq2Seq model, the model needs to be initialized with both `is_decoder=True` and `bidirectional=False` arguments as well as `add_cross_attention` set to `True`; an `encoder_hidden_states` input is then expected in the forward pass, as sketched below. Methods: forward
238_6_4
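A minimal sketch of the decoder setup described above; everything beyond the three flags named in the text is left at its default.

```python
from transformers import MegaConfig, MegaModel

decoder_config = MegaConfig(
    is_decoder=True,           # enables causal decoding and caching
    bidirectional=False,       # bidirectional EMA is incompatible with causal decoding
    add_cross_attention=True,  # adds a cross-attention layer after self-attention
)
decoder = MegaModel(decoder_config)
# forward(...) then expects `encoder_hidden_states` from the paired encoder.
```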
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforcausallm
.md
MEGA Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
238_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
238_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforcausallm
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
238_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaformaskedlm
.md
MEGA Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
238_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaformaskedlm
.md
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
238_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforsequenceclassification
.md
MEGA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
238_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
238_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforsequenceclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
238_9_2
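A classification sketch, assuming transformers==4.40.2 and that the reference `mnaylor/mega-base-wikitext` checkpoint ships a compatible tokenizer; the classification head is freshly initialized, so the predicted label is meaningless until fine-tuning.

```python
import torch
from transformers import AutoTokenizer, MegaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForSequenceClassification.from_pretrained("mnaylor/mega-base-wikitext", num_labels=2)

inputs = tokenizer("MEGA scales to long documents.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
```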
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaformultiplechoice
.md
MEGA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
238_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
238_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaformultiplechoice
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
238_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megafortokenclassification
.md
MEGA Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
238_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megafortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
238_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megafortokenclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
238_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforquestionanswering
.md
MEGA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
238_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforquestionanswering
.md
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the
238_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mega.md
https://huggingface.co/docs/transformers/en/model_doc/mega/#megaforquestionanswering
.md
and behavior. Parameters: config ([`MegaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
238_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
239_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
239_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#overview
.md
The CodeGen model was proposed in [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen is an autoregressive language model for program synthesis trained sequentially on [The Pile](https://pile.eleuther.ai/), BigQuery, and BigPython. The abstract from the paper is the following:
239_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#overview
.md
*Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in
239_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#overview
.md
as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To
239_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#overview
.md
scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In
239_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#overview
.md
emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: [this https URL](https://github.com/salesforce/codegen).*
239_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#overview
.md
This model was contributed by [Hiroaki Hayashi](https://huggingface.co/rooa). The original code can be found [here](https://github.com/salesforce/codegen).
239_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#checkpoint-naming
.md
* CodeGen model [checkpoints](https://huggingface.co/models?other=codegen) are available on different pre-training data with variable sizes. * The format is: `Salesforce/codegen-{size}-{data}`, where * `size`: `350M`, `2B`, `6B`, `16B` * `data`: * `nl`: Pre-trained on the Pile * `multi`: Initialized with `nl`, then further pre-trained on multiple programming languages data * `mono`: Initialized with `multi`, then further pre-trained on Python data
239_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#checkpoint-naming
.md
* `mono`: Initialized with `multi`, then further pre-trained on Python data * For example, `Salesforce/codegen-350M-mono` offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python.
239_2_1
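A tiny sketch of the naming scheme above; only size/data combinations that actually exist on the Hub will resolve.

```python
# Build a checkpoint id from the `Salesforce/codegen-{size}-{data}` pattern.
size, data = "350M", "mono"
checkpoint = f"Salesforce/codegen-{size}-{data}"  # "Salesforce/codegen-350M-mono"
```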
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#usage-example
.md
```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> checkpoint = "Salesforce/codegen-350M-mono" >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> text = "def hello_world():" >>> completion = model.generate(**tokenizer(text, return_tensors="pt")) >>> print(tokenizer.decode(completion[0])) def hello_world(): print("Hello World") hello_world() ```
239_3_0
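A possible follow-up to the example above, reusing `model`, `tokenizer`, and `text` from it; `max_new_tokens=64` and reusing the EOS token for padding are illustrative choices, not values taken from the docs.

```python
completion = model.generate(
    **tokenizer(text, return_tensors="pt"),
    max_new_tokens=64,                    # cap the length of the generated code
    do_sample=False,                      # greedy decoding, as in the example above
    pad_token_id=tokenizer.eos_token_id,  # CodeGen checkpoints define no pad token
)
print(tokenizer.decode(completion[0], skip_special_tokens=True))
```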
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#resources
.md
- [Causal language modeling task guide](../tasks/language_modeling)
239_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
This is the configuration class to store the configuration of a [`CodeGenModel`]. It is used to instantiate a CodeGen model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CodeGen [Salesforce/codegen-2B-mono](https://huggingface.co/Salesforce/codegen-2B-mono) architecture. Configuration objects
239_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
[Salesforce/codegen-2B-mono](https://huggingface.co/Salesforce/codegen-2B-mono) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50400): Vocabulary size of the CodeGen model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`CodeGenModel`].
239_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
`inputs_ids` passed when calling [`CodeGenModel`]. n_positions (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_ctx (`int`, *optional*, defaults to 2048): This attribute is used in `CodeGenModel.__init__` without any real effect. n_embd (`int`, *optional*, defaults to 4096): Dimensionality of the embeddings and hidden states.
239_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
n_embd (`int`, *optional*, defaults to 4096): Dimensionality of the embeddings and hidden states. n_layer (`int`, *optional*, defaults to 28): Number of hidden layers in the Transformer encoder. n_head (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. rotary_dim (`int`, *optional*, defaults to 64): Number of dimensions in the embedding that Rotary Position Embedding is applied to. n_inner (`int`, *optional*):
239_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
Number of dimensions in the embedding that Rotary Position Embedding is applied to. n_inner (`int`, *optional*): Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd activation_function (`str`, *optional*, defaults to `"gelu_new"`): Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`. resid_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
239_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (`int`, *optional*, defaults to 0.0): The dropout ratio for the embeddings. attn_pdrop (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02):
239_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
The epsilon to use in the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). bos_token_id (`int`, *optional*, defaults to 50256): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 50256):
239_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 50256): End of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the model has a output word embedding layer. Example: ```python >>> from transformers import CodeGenConfig, CodeGenModel
239_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenconfig
.md
>>> # Initializing a CodeGen 6B configuration >>> configuration = CodeGenConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = CodeGenModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` Methods: all
239_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizer
.md
Construct a CodeGen tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: ```python >>> from transformers import CodeGenTokenizer >>> tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono") >>> tokenizer("Hello world")["input_ids"] [15496, 995]
239_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizer
.md
>>> tokenizer(" Hello world")["input_ids"] [18435, 995] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip>
239_6_1
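A short sketch of the `add_prefix_space=True` workaround described above; the expected ids follow from the `[15496, 995]` and `[18435, 995]` outputs shown in the docstring.

```python
from transformers import CodeGenTokenizer

tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono")

# Default behaviour: a sentence-initial "Hello" gets its own id.
tokenizer("Hello world")["input_ids"]                          # [15496, 995]

# With a prefix space, "Hello" is encoded as if it appeared mid-sentence.
tokenizer("Hello world", add_prefix_space=True)["input_ids"]   # [18435, 995]
```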
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizer
.md
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip> This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See
239_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizer
.md
errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The beginning of sequence token.
239_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizer
.md
token instead. bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The end of sequence token. pad_token (`str`, *optional*): The token used for padding, for example when batching sequences of different lengths. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows to treat the leading word just as any
239_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizer
.md
Whether or not to add an initial space to the input. This allows treating the leading word just like any other word (the CodeGen tokenizer detects the beginning of words by the preceding space). add_bos_token (`bool`, *optional*, defaults to `False`): Whether to add a beginning of sequence token at the start of sequences. return_token_type_ids (`bool`, *optional*, defaults to `False`): Whether to return token type IDs. Methods: create_token_type_ids_from_sequences - save_vocabulary
239_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizerfast
.md
Construct a "fast" CodeGen tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: ```python >>> from transformers import CodeGenTokenizerFast
239_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizerfast
.md
>>> tokenizer = CodeGenTokenizerFast.from_pretrained("Salesforce/codegen-350M-mono") >>> tokenizer("Hello world")["input_ids"] [15496, 995]
239_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizerfast
.md
>>> tokenizer(" Hello world")["input_ids"] [18435, 995] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`. </Tip> This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
239_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizerfast
.md
</Tip> This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`, *optional*): Path to the vocabulary file. merges_file (`str`, *optional*): Path to the merges file. tokenizer_file (`str`, *optional*): Path to [tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that contains everything needed to load the tokenizer.
239_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizerfast
.md
contains everything needed to load the tokenizer. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The end of sequence token. add_prefix_space (`bool`, *optional*, defaults to `False`):
239_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegentokenizerfast
.md
The end of sequence token. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows treating the leading word just like any other word (the CodeGen tokenizer detects the beginning of words by the preceding space). return_token_type_ids (`bool`, *optional*, defaults to `False`): Whether to return token type IDs.
239_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenmodel
.md
The bare CodeGen Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`CodeGenConfig`]): Model configuration class with all the parameters of the model.
239_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenmodel
.md
behavior. Parameters: config ([`CodeGenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
239_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenforcausallm
.md
The CodeGen Model transformer with a language modeling head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`CodeGenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
239_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/codegen.md
https://huggingface.co/docs/transformers/en/model_doc/codegen/#codegenforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
239_9_1