source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertforquestionanswering
.md
QDQBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
347_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model.
347_16_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertforquestionanswering
.md
and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
347_16_2
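For orientation, the sketch below shows how `span start logits` and `span end logits` from a question-answering head of this kind are typically decoded into an answer span. The tensors are random placeholders; no QDQBERT checkpoint or quantization setup is involved.

```python
# Illustrative sketch only: decoding span start/end logits into a token span,
# using random placeholder tensors instead of real model output.
import torch

seq_len = 12
start_logits = torch.randn(1, seq_len)  # score per token: "the answer starts here"
end_logits = torch.randn(1, seq_len)    # score per token: "the answer ends here"

start_index = int(start_logits.argmax(dim=-1))
end_index = int(end_logits.argmax(dim=-1))
if end_index < start_index:
    # naive fix-up; real QA pipelines score all valid (start, end) pairs instead
    start_index, end_index = end_index, start_index

print(f"predicted answer span: tokens {start_index}..{end_index}")
```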
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
348_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
348_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#overview
.md
The Aria model was proposed in [Aria: An Open Multimodal Native Mixture-of-Experts Model](https://huggingface.co/papers/2410.05993) by Li et al. from the Rhymes.AI team. Aria is an open multimodal-native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. It has a Mixture-of-Experts architecture with 3.9B and 3.5B activated parameters per visual token and text token, respectively. The abstract from the paper is the following:
348_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#overview
.md
*Information comes in diverse modalities. Multimodal native AI models are essential to integrate real-world information and deliver comprehensive understanding. While proprietary multimodal native models exist, their lack of openness imposes obstacles for adoptions, let alone adaptations. To fill this gap, we introduce Aria, an open multimodal native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. Aria is a mixture-of-expert model with 3.9B and 3.5B
348_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#overview
.md
performance across a wide range of multimodal, language, and coding tasks. Aria is a mixture-of-expert model with 3.9B and 3.5B activated parameters per visual token and text token, respectively. It outperforms Pixtral-12B and Llama3.2-11B, and is competitive against the best proprietary models on various multimodal tasks. We pre-train Aria from scratch following a 4-stage pipeline, which progressively equips the model with strong capabilities in language understanding, multimodal understanding, long
348_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#overview
.md
which progressively equips the model with strong capabilities in language understanding, multimodal understanding, long context window, and instruction following. We open-source the model weights along with a codebase that facilitates easy adoptions and adaptations of Aria in real-world applications.*
348_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#overview
.md
This model was contributed by [m-ric](https://huggingface.co/m-ric). The original code can be found [here](https://github.com/rhymes-ai/Aria).
348_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#usage-tips
.md
Here's how to use the model for vision tasks:
```python
import requests
import torch
from PIL import Image
from transformers import AriaProcessor, AriaForConditionalGeneration

model_id_or_path = "rhymes-ai/Aria"
model = AriaForConditionalGeneration.from_pretrained(
    model_id_or_path, device_map="auto"
)
processor = AriaProcessor.from_pretrained(model_id_or_path)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
348_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#usage-tips
.md
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"text": "what is the image?", "type": "text"},
        ],
    }
]

text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=text, images=image, return_tensors="pt")
inputs.to(model.device)
348_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#usage-tips
.md
output = model.generate(
    **inputs,
    max_new_tokens=15,
    stop_strings=["<|im_end|>"],
    tokenizer=processor.tokenizer,
    do_sample=True,
    temperature=0.9,
)
output_ids = output[0][inputs["input_ids"].shape[1]:]
response = processor.decode(output_ids, skip_special_tokens=True)
```
348_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaimageprocessor
.md
A vision processor for the Aria model that handles image preprocessing. Initialize the AriaImageProcessor. Args: image_mean (`list`, *optional*, defaults to [0.5, 0.5, 0.5]): Mean values for normalization. image_std (`list`, *optional*, defaults to [0.5, 0.5, 0.5]): Standard deviation values for normalization. max_image_size (`int`, *optional*, defaults to 980): Maximum image size. min_image_size (`int`, *optional*, defaults to 336): Minimum image size.
348_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaimageprocessor
.md
Maximum image size. min_image_size (`int`, *optional*, defaults to 336): Minimum image size. split_resolutions (`list`, *optional*, defaults to a list of optimal resolutions as tuples): The optimal resolutions for splitting the image. split_image (`bool`, *optional*, defaults to `False`): Whether to split the image. do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image.
348_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaimageprocessor
.md
Whether to convert the image to RGB. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. resample (PILImageResampling, *optional*, defaults to `BICUBIC`): The resampling filter to use if resizing the image.
348_3_2
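As a hedged illustration of the arguments listed above (the exact constructor signature and output keys should be checked against the released `AriaImageProcessor` class), the documented defaults could be passed explicitly like this:

```python
# Sketch under the assumption that AriaImageProcessor follows the usual
# transformers image-processor call convention (returning a BatchFeature).
import requests
from PIL import Image
from transformers import AriaImageProcessor

image_processor = AriaImageProcessor(
    image_mean=[0.5, 0.5, 0.5],  # documented default
    image_std=[0.5, 0.5, 0.5],   # documented default
    max_image_size=980,          # documented default
    min_image_size=336,          # documented default
    split_image=False,
    do_convert_rgb=True,
    do_normalize=True,
)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
features = image_processor(image, return_tensors="pt")
print(features["pixel_values"].shape)
```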
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaprocessor
.md
AriaProcessor is a processor for the Aria model which wraps the Aria image preprocessor and the Llama slow tokenizer. Args: image_processor (`AriaImageProcessor`, *optional*): The AriaImageProcessor to use for image preprocessing. tokenizer (`PreTrainedTokenizerBase`, *optional*): An instance of [`PreTrainedTokenizerBase`]. This should correspond with the model's text model. The tokenizer is a required input. chat_template (`str`, *optional*):
348_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaprocessor
.md
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string. size_conversion (`Dict`, *optional*): A dictionary indicating size conversions for images.
348_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
This class handles the configuration for the text component of the Aria model. Instantiating a configuration with the defaults will yield a similar configuration to that of the Aria [rhymes-ai/Aria](https://huggingface.co/rhymes-ai/Aria) architecture. This class extends the LlamaConfig to include additional parameters specific to the Mixture of Experts (MoE) architecture. Args: vocab_size (`int`, *optional*, defaults to 32000):
348_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`LlamaModel`]. hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 4096): The size of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer decoder.
348_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
348_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`.
348_5_3
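The mean-pooling step described above can be sketched with plain tensors. This is not the conversion code used by Transformers; the shapes and names below are hypothetical and only illustrate grouping the original heads and averaging them.

```python
# Illustrative only: converting multi-head K/V projection weights to
# grouped-query (GQA) weights by mean-pooling the heads within each group.
import torch

num_attention_heads = 32
num_key_value_heads = 8          # GQA target
head_dim = 128
hidden_size = num_attention_heads * head_dim

# Hypothetical MHA key projection weight: (num_heads * head_dim, hidden_size)
k_proj_mha = torch.randn(num_attention_heads * head_dim, hidden_size)

group_size = num_attention_heads // num_key_value_heads
k_heads = k_proj_mha.view(num_attention_heads, head_dim, hidden_size)
# Average every `group_size` consecutive heads into one key/value head
k_proj_gqa = k_heads.view(num_key_value_heads, group_size, head_dim, hidden_size).mean(dim=1)
k_proj_gqa = k_proj_gqa.reshape(num_key_value_heads * head_dim, hidden_size)
print(k_proj_gqa.shape)  # torch.Size([1024, 4096])
```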
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Llama 1 supports up to 2048 tokens, Llama 2 up to 4096, CodeLlama up to 16384.
348_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
Llama 2 up to 4096, CodeLlama up to 16384. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.
348_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*, defaults to 2): Padding token id. bos_token_id (`int`, *optional*, defaults to 1): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 2): End of stream token id. pretraining_tp (`int`, *optional*, defaults to 1): Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to
348_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to understand more about it. This value is necessary to ensure exact reproducibility of the pretraining results. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/76232). tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie weight embeddings. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings.
348_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
Whether to tie weight embeddings. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work on a longer `max_position_embeddings`, we recommend updating this value accordingly. Expected contents: `rope_type` (`str`):
348_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
accordingly. Expected contents: `rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length.
348_5_9
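As a small worked example of the `factor` arithmetic above (the values are hypothetical, not Aria defaults):

```python
# Hypothetical rope_scaling entry: with linear scaling and factor=4.0, a model
# pre-trained with max_position_embeddings=2048 is expected to handle roughly
# 4.0 * 2048 = 8192 positions.
rope_scaling = {"rope_type": "linear", "factor": 4.0}
original_max_position_embeddings = 2048
print(int(rope_scaling["factor"] * original_max_position_embeddings))  # 8192
```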
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining. `attention_factor` (`float`, *optional*): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the `factor` field to infer the suggested value. `beta_fast` (`float`, *optional*):
348_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
`factor` field to infer the suggested value. `beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32. `beta_slow` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*):
348_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (<
348_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
`long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (> `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `low_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE `high_freq_factor` (`float`, *optional*):
348_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
`high_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE attention_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. mlp_bias (`bool`, *optional*, defaults to `False`):
348_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
The dropout ratio for the attention probabilities. mlp_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers. head_dim (`int`, *optional*): The attention head dimension. If None, it will default to hidden_size // num_heads moe_num_experts (`int`, *optional*, defaults to 8): The number of experts in the MoE layer. moe_topk (`int`, *optional*, defaults to 2): The number of top experts to route to for each token.
348_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextconfig
.md
moe_topk (`int`, *optional*, defaults to 2): The number of top experts to route to for each token. moe_num_shared_experts (`int`, *optional*, defaults to 2): The number of shared experts.
348_5_16
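A minimal sketch of instantiating the text config with the MoE arguments documented above; the values shown are the documented defaults, and the full signature should be checked against the released class:

```python
from transformers import AriaTextConfig

# Sketch only: the keyword names follow the argument list documented above.
text_config = AriaTextConfig(
    vocab_size=32000,
    hidden_size=4096,
    num_hidden_layers=32,
    num_attention_heads=32,
    moe_num_experts=8,         # experts per MoE layer
    moe_topk=2,                # experts routed to per token
    moe_num_shared_experts=2,  # shared (always-active) experts
)
print(text_config.moe_num_experts, text_config.moe_topk)
```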
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaconfig
.md
This class handles the configuration for both vision and text components of the Aria model, as well as additional parameters for image token handling and projector mapping. Instantiating a configuration with the defaults will yield a similar configuration to that of the Aria [rhymes-ai/Aria](https://huggingface.co/rhymes-ai/Aria) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
348_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vision_config (`AriaVisionConfig` or `dict`, *optional*): Configuration for the vision component. vision_feature_layer (`int`, *optional*, defaults to -1): The index of the layer to select the vision feature. text_config (`AriaTextConfig` or `dict`, *optional*): Configuration for the text component.
348_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaconfig
.md
text_config (`AriaTextConfig` or `dict`, *optional*): Configuration for the text component. projector_patch_to_query_dict (`dict`, *optional*): Mapping of patch sizes to query dimensions. image_token_index (`int`, *optional*, defaults to 9): Index used to represent image tokens. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated normal initializer for initializing all weight matrices. Attributes: model_type (`str`): Type of the model, set to `"aria"`.
348_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaconfig
.md
Attributes: model_type (`str`): Type of the model, set to `"aria"`. image_token_index (`int`): Index used to represent image tokens. projector_patch_to_query_dict (`dict`): Mapping of patch sizes to query dimensions. vision_config (`AriaVisionConfig`): Configuration for the vision component. text_config (`AriaTextConfig`): Configuration for the text component.
348_6_3
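And a similarly hedged sketch of composing the multimodal config from its documented parts (whether sub-configs are passed as objects or dicts should be checked against the released class; unspecified arguments fall back to the defaults listed above):

```python
from transformers import AriaConfig, AriaTextConfig

# Sketch only: compose the composite config from the documented components.
config = AriaConfig(
    text_config=AriaTextConfig(moe_num_experts=8, moe_topk=2),
    image_token_index=9,     # documented default
    initializer_range=0.02,  # documented default
)
print(config.model_type)            # "aria"
print(config.text_config.moe_topk)  # 2
```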
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextmodel
.md
The bare AriaText Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
348_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`AriaTextConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
348_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is an [`AriaTextDecoderLayer`] Args: config: AriaTextConfig
348_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariatextforcausallm
.md
Aria model for causal language modeling tasks. This class extends `LlamaForCausalLM` to incorporate the Mixture of Experts (MoE) approach, allowing for more efficient and scalable language modeling. Args: config (`AriaTextConfig`): Configuration object for the model.
348_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaforconditionalgeneration
.md
Aria model for conditional generation tasks. This model combines a vision tower, a multi-modal projector, and a language model to perform tasks that involve both image and text inputs. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
348_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaforconditionalgeneration
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config (`AriaConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not
348_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/aria.md
https://huggingface.co/docs/transformers/en/model_doc/aria/#ariaforconditionalgeneration
.md
config (`AriaConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
348_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
349_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
349_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#overview
.md
The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using [`EncoderDecoderModel`] as proposed in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. The abstract from the paper is the following: *Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By
349_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#overview
.md
*Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
349_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#overview
.md
Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation,
349_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#overview
.md
encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation, Text Summarization, Sentence Splitting, and Sentence Fusion.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder).
349_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#usage-examples-and-tips
.md
The model can be used in combination with the [`EncoderDecoderModel`] to leverage two pretrained BERT checkpoints for subsequent fine-tuning:
```python
>>> # leverage checkpoints for Bert2Bert model...
>>> # use BERT's cls token as BOS token and sep token as EOS token
>>> encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102)
>>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
349_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#usage-examples-and-tips
.md
>>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
>>> decoder = BertGenerationDecoder.from_pretrained(
...     "google-bert/bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
... )
>>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)
349_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#usage-examples-and-tips
.md
>>> # create tokenizer...
>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")

>>> input_ids = tokenizer(
...     "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
349_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#usage-examples-and-tips
.md
>>> # train...
>>> loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
>>> loss.backward()
```

Pretrained [`EncoderDecoderModel`] checkpoints are also directly available on the model hub, e.g.:
```python
>>> # instantiate sentence fusion model
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
349_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#usage-examples-and-tips
.md
>>> input_ids = tokenizer(
...     "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids

>>> outputs = sentence_fuser.generate(input_ids)
349_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#usage-examples-and-tips
.md
>>> outputs = sentence_fuser.generate(input_ids)

>>> print(tokenizer.decode(outputs[0]))
```

Tips:

- [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with [`EncoderDecoderModel`].
- For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input. Therefore, no EOS token should be added to the end of the input.
349_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
This is the configuration class to store the configuration of a [`BertGenerationPreTrainedModel`]. It is used to instantiate a BertGeneration model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BertGeneration [google/bert_for_seq_generation_L-24_bbc_encoder](https://huggingface.co/google/bert_for_seq_generation_L-24_bbc_encoder) architecture.
349_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50358): Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`BertGeneration`]. hidden_size (`int`, *optional*, defaults to 1024):
349_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
`input_ids` passed when calling [`BertGeneration`]. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 4096):
349_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
intermediate_size (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often called feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
349_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
`"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large
349_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0): Padding token id.
349_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
The epsilon used by the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 2): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 1): End of stream token id. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
349_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
349_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.

Examples:
```python
>>> from transformers import BertGenerationConfig, BertGenerationEncoder
349_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationconfig
.md
>>> # Initializing a BertGeneration config
>>> configuration = BertGenerationConfig()

>>> # Initializing a model (with random weights) from the config
>>> model = BertGenerationEncoder(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
349_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationtokenizer
.md
Construct a BertGeneration tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer.
349_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationtokenizer
.md
contains the vocabulary necessary to instantiate a tokenizer. bos_token (`str`, *optional*, defaults to `"<s>"`): The begin of sequence token. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`):
349_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationtokenizer
.md
token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. sep_token (`str`, *optional*, defaults to `"<::::>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. sp_model_kwargs (`dict`, *optional*):
349_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationtokenizer
.md
token of a sequence built with special tokens. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed.
349_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationtokenizer
.md
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Methods: save_vocabulary
349_4_4
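As a hedged example of the `sp_model_kwargs` argument described above, subword regularization could be switched on like this (the checkpoint name is the one referenced in the config section; with sampling enabled, tokenization becomes stochastic):

```python
from transformers import BertGenerationTokenizer

# Sketch only: forward SentencePiece sampling options through sp_model_kwargs.
tokenizer = BertGenerationTokenizer.from_pretrained(
    "google/bert_for_seq_generation_L-24_bbc_encoder",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# Repeated calls may yield different segmentations of the same text.
print(tokenizer.tokenize("This is a long article to summarize"))
```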
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationencoder
.md
The bare BertGeneration model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
349_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationencoder
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`BertGenerationConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
349_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationencoder
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is
349_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationencoder
.md
cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. This model should be used when leveraging Bert or Roberta checkpoints for the [`EncoderDecoderModel`] class as
349_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationencoder
.md
This model should be used when leveraging Bert or Roberta checkpoints for the [`EncoderDecoderModel`] class as described in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
349_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationencoder
.md
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
349_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationdecoder
.md
BertGeneration Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
349_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationdecoder
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`BertGenerationConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
349_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert-generation.md
https://huggingface.co/docs/transformers/en/model_doc/bert-generation/#bertgenerationdecoder
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
349_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
350_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
350_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transformer-xl
.md
<Tip warning={true}> This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to `pickle.load`. We recommend switching to more recent models for improved security. In case you would still like to use `TransfoXL` in your experiments, we recommend using the [Hub checkpoint](https://huggingface.co/transfo-xl/transfo-xl-wt103) with a specific revision to ensure you are downloading safe files from the Hub.
350_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transformer-xl
.md
You will need to set the environment variable `TRUST_REMOTE_CODE` to `True` in order to allow the usage of `pickle.load()`:
```python
import os
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
350_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transformer-xl
.md
os.environ["TRUST_REMOTE_CODE"] = "True" checkpoint = 'transfo-xl/transfo-xl-wt103' revision = '40a186da79458c9f9de846edfaea79c412137f97'
350_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transformer-xl
.md
tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
```

If you run into any issues running this model, please reinstall the last version that supported this model: v4.35.0. You can do so by running the following command: `pip install -U transformers==4.35.0`.

</Tip>

<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=transfo-xl">
350_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transformer-xl
.md
</Tip> <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=transfo-xl"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-transfo--xl-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/transfo-xl-wt103"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
350_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#overview
.md
The Transformer-XL model was proposed in [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoidal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied).
350_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#overview
.md
inputs and outputs (tied). The abstract from the paper is the following: *Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a
350_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#overview
.md
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+
350_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#overview
.md
longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens.*
350_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#overview
.md
coherent, novel text articles with thousands of tokens.* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/kimiyoung/transformer-xl).
350_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#usage-tips
.md
- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left. - Transformer-XL is one of the few models that has no sequence length limit.
350_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#usage-tips
.md
- Transformer-XL is one of the few models that has no sequence length limit. - Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNN with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model.
350_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#usage-tips
.md
- Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments.
350_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#usage-tips
.md
- This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results for the current input and the current hidden state at a given position) and requires some adjustments in the way attention scores are computed. <Tip warning={true}> TransformerXL does **not** work with *torch.nn.DataParallel* due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035) </Tip>
350_3_3
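The segment-level recurrence described in these tips can be sketched as follows. This is an illustration only (the model is deprecated and gated behind `TRUST_REMOTE_CODE`, as noted above); the segment length of 32 is arbitrary.

```python
# Sketch only: feed text segment by segment and pass the returned `mems`
# (the previous segment's hidden states) back into the next forward pass.
import os
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

os.environ["TRUST_REMOTE_CODE"] = "True"
checkpoint = "transfo-xl/transfo-xl-wt103"
revision = "40a186da79458c9f9de846edfaea79c412137f97"

tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)

text = "The quick brown fox jumps over the lazy dog. " * 8
input_ids = tokenizer(text, return_tensors="pt").input_ids
segments = torch.split(input_ids, 32, dim=1)  # arbitrary fixed-size segments, fed in order

mems = None
for segment in segments:
    outputs = model(segment, mems=mems)
    mems = outputs.mems  # reused as extra context by the next segment
```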
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling)
350_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
.md
This is the configuration class to store the configuration of a [`TransfoXLModel`] or a [`TFTransfoXLModel`]. It is used to instantiate a Transformer-XL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the TransfoXL [transfo-xl/transfo-xl-wt103](https://huggingface.co/transfo-xl/transfo-xl-wt103) architecture.
350_5_0