Columns: source (string, 470 distinct values), url (string, length 49-167), file_type (string, 1 distinct value), chunk (string, length 1-512), chunk_id (string, length 5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#overview
.md
speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL* This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/YOSO).
388_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#usage-tips
.md
- The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times in parallel on a GPU. - The kernels provide a `fast_hash` function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these hash codes, the `lsh_cumulation` function approximates self-attention via LSH-based Bernoulli sampling.
388_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#usage-tips
.md
hash codes, the `lsh_cumulation` function approximates self-attention via LSH-based Bernoulli sampling. - To use the custom kernels, the user should set `config.use_expectation = False`. To ensure that the kernels are compiled successfully, the user must install the correct version of PyTorch and cudatoolkit. By default, `config.use_expectation = True`, which uses YOSO-E and does not require compiling CUDA kernels.
388_2_1
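A minimal sketch of the kernel toggle described above, assuming a CUDA-capable environment with a matching PyTorch and cudatoolkit install so the custom kernels can compile; with the default `use_expectation = True` (YOSO-E) no compilation is needed:

```python
from transformers import YosoConfig, YosoModel

# Default configuration: YOSO-E, no CUDA kernel compilation required.
config = YosoConfig()
assert config.use_expectation is True

# Opt into the sampling-based custom kernels (needs a compatible PyTorch +
# cudatoolkit so that the CUDA C++ kernels can be built).
config.use_expectation = False
model = YosoModel(config)  # randomly initialized, for illustration only
```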
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#usage-tips
.md
does not require compiling CUDA kernels. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/yoso_architecture.jpg" alt="drawing" width="600"/> <small> YOSO Attention Algorithm. Taken from the <a href="https://arxiv.org/abs/2111.09714">original paper</a>.</small>
388_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
388_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
This is the configuration class to store the configuration of a [`YosoModel`]. It is used to instantiate a YOSO model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the YOSO [uw-madison/yoso-4096](https://huggingface.co/uw-madison/yoso-4096) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
388_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50265): Vocabulary size of the YOSO model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`YosoModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer.
388_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
388_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
388_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2):
388_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`YosoModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
388_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. use_expectation (`bool`, *optional*, defaults to `True`): Whether or not to use YOSO Expectation. Overrides any effect of num_hash. hash_code_len (`int`, *optional*, defaults to 9): The length of hashes generated by the hash functions. num_hash (`int`, *optional*, defaults to 64):
388_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
The length of hashes generated by the hash functions. num_hash (`int`, *optional*, defaults to 64): Number of hash functions used in [`YosoSelfAttention`]. conv_window (`int`, *optional*): Kernel size of depth-wise convolution. use_fast_hash (`bool`, *optional*, defaults to `False`): Whether or not to use custom CUDA kernels that perform fast random projection via the Hadamard transform. lsh_backward (`bool`, *optional*, defaults to `True`):
388_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
lsh_backward (`bool`, *optional*, defaults to `True`): Whether or not to perform backpropagation using Locality Sensitive Hashing. Example: ```python >>> from transformers import YosoConfig, YosoModel
388_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoconfig
.md
>>> # Initializing a YOSO uw-madison/yoso-4096 style configuration >>> configuration = YosoConfig() >>> # Initializing a model (with random weights) from the uw-madison/yoso-4096 style configuration >>> model = YosoModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
388_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosomodel
.md
The bare YOSO Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model.
388_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosomodel
.md
behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
388_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoformaskedlm
.md
YOSO Model with a `language modeling` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
388_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoformaskedlm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
388_6_1
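A short masked-language-modeling sketch for this head, assuming the [uw-madison/yoso-4096](https://huggingface.co/uw-madison/yoso-4096) checkpoint ships both the MLM weights and a tokenizer:

```python
import torch
from transformers import AutoTokenizer, YosoForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoForMaskedLM.from_pretrained("uw-madison/yoso-4096")

text = f"Paris is the {tokenizer.mask_token} of France."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring token at the mask position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```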
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoforsequenceclassification
.md
YOSO Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model.
388_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoforsequenceclassification
.md
behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
388_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoformultiplechoice
.md
YOSO Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model.
388_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoformultiplechoice
.md
behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
388_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosofortokenclassification
.md
YOSO Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model.
388_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosofortokenclassification
.md
behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
388_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoforquestionanswering
.md
YOSO Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
388_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yoso.md
https://huggingface.co/docs/transformers/en/model_doc/yoso/#yosoforquestionanswering
.md
behavior. Parameters: config ([`YosoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
388_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
389_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
389_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectory-transformer
.md
<Tip warning={true}> This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. </Tip>
389_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#overview
.md
The Trajectory Transformer model was proposed in [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine. The abstract from the paper is the following: *Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence
389_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#overview
.md
leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well in other domains, such as natural-language processing, can also provide effective solutions to the RL problem.
389_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#overview
.md
in other domains, such as natural-language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common
389_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#overview
.md
modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.*
389_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#overview
.md
existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.* This model was contributed by [CarlCochet](https://huggingface.co/CarlCochet). The original code can be found [here](https://github.com/jannerm/trajectory-transformer).
389_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#usage-tips
.md
This Transformer is used for deep reinforcement learning. To use it, you need to create sequences from actions, states and rewards from all previous timesteps. This model will treat all these elements together as one big sequence (a trajectory).
389_3_0
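A rough sketch of that idea. Note the model is in maintenance mode and needs `transformers==4.30.0`; the snippet also assumes the forward pass accepts a flat `trajectories` tensor of discretized state/action/reward tokens, with random token ids standing in for real data:

```python
import torch
from transformers import TrajectoryTransformerModel

# Requires transformers==4.30.0 (the model is in maintenance mode).
model = TrajectoryTransformerModel.from_pretrained(
    "CarlCochet/trajectory-transformer-halfcheetah-medium-v2"
)
model.eval()

# One trajectory = discretized observation, action and reward tokens for each
# timestep, concatenated into a single flat token sequence.
batch_size, seq_length = 2, 100  # illustrative sizes
trajectories = torch.randint(0, model.config.vocab_size, (batch_size, seq_length))

with torch.no_grad():
    outputs = model(trajectories)
```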
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
This is the configuration class to store the configuration of a [`TrajectoryTransformerModel`]. It is used to instantiate a TrajectoryTransformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the TrajectoryTransformer [CarlCochet/trajectory-transformer-halfcheetah-medium-v2](https://huggingface.co/CarlCochet/trajectory-transformer-halfcheetah-medium-v2) architecture.
389_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 100): Vocabulary size of the TrajectoryTransformer model. Defines the number of different tokens that can be represented by the `trajectories` passed when calling [`TrajectoryTransformerModel`] action_weight (`int`, *optional*, defaults to 5):
389_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
action_weight (`int`, *optional*, defaults to 5): Weight of the action in the loss function. reward_weight (`int`, *optional*, defaults to 1): Weight of the reward in the loss function. value_weight (`int`, *optional*, defaults to 1): Weight of the value in the loss function. block_size (`int`, *optional*, defaults to 249): Size of the blocks in the trajectory transformer. action_dim (`int`, *optional*, defaults to 6): Dimension of the action space. observation_dim (`int`, *optional*, defaults to 17):
389_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
Dimension of the action space. observation_dim (`int`, *optional*, defaults to 17): Dimension of the observation space. transition_dim (`int`, *optional*, defaults to 25): Dimension of the transition space. n_layer (`int`, *optional*, defaults to 4): Number of hidden layers in the Transformer encoder. n_head (`int`, *optional*, defaults to 4): Number of attention heads for each attention layer in the Transformer encoder. n_embd (`int`, *optional*, defaults to 128):
389_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
Number of attention heads for each attention layer in the Transformer encoder. n_embd (`int`, *optional*, defaults to 128): Dimensionality of the embeddings and hidden states. resid_pdrop (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (`int`, *optional*, defaults to 0.1): The dropout ratio for the embeddings. attn_pdrop (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention.
389_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
The dropout ratio for the embeddings. attn_pdrop (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. max_position_embeddings (`int`, *optional*, defaults to 512):
389_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
`"relu"`, `"selu"` and `"gelu_new"` are supported. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12):
389_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. kaiming_initializer_range (`float`, *optional*, defaults to 1): A coefficient scaling the negative slope of the kaiming initializer rectifier for EinLinear layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. Example: ```python
389_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
relevant if `config.is_decoder=True`. Example: ```python >>> from transformers import TrajectoryTransformerConfig, TrajectoryTransformerModel
389_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformerconfig
.md
>>> # Initializing a TrajectoryTransformer CarlCochet/trajectory-transformer-halfcheetah-medium-v2 style configuration >>> configuration = TrajectoryTransformerConfig() >>> # Initializing a model (with random weights) from the CarlCochet/trajectory-transformer-halfcheetah-medium-v2 style configuration >>> model = TrajectoryTransformerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
389_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformermodel
.md
The bare TrajectoryTransformer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`TrajectoryTransformerConfig`]): Model configuration class with all the parameters of the model.
389_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trajectory_transformer.md
https://huggingface.co/docs/transformers/en/model_doc/trajectory_transformer/#trajectorytransformermodel
.md
Parameters: config ([`TrajectoryTransformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. This is the full GPT language model, with a context size of `block_size`. Methods: forward
389_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
390_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
390_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#overview
.md
`StableLM 3B 4E1T` was proposed in [`StableLM 3B 4E1T`: Technical Report](https://stability.wandb.io/stability-llm/stable-lm/reports/StableLM-3B-4E1T--VmlldzoyMjU4?accessToken=u3zujipenkx5g7rtcj9qojjgxpconyjktjkli2po09nffrffdhhchq045vp0wyfo) by Stability AI and is the first model in a series of multi-epoch pre-trained language models.
390_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#model-details
.md
`StableLM 3B 4E1T` is a decoder-only base language model pre-trained on 1 trillion tokens of diverse English and code datasets for four epochs. The model architecture is transformer-based with partial Rotary Position Embeddings, SwiGLU activation, LayerNorm, etc. We also provide `StableLM Zephyr 3B`, an instruction fine-tuned version of the model that can be used for chat-based applications.
390_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#usage-tips
.md
- The architecture is similar to LLaMA but with RoPE applied to 25% of head embedding dimensions, LayerNorm instead of RMSNorm, and optional QKV bias terms. - `StableLM 3B 4E1T`-based models use the same tokenizer as [`GPTNeoXTokenizerFast`]. `StableLM 3B 4E1T` and `StableLM Zephyr 3B` can be found on the [Hugging Face Hub](https://huggingface.co/stabilityai). The following code snippet demonstrates how to use `StableLM 3B 4E1T` for inference: ```python
390_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#usage-tips
.md
The following code snippet demonstrates how to use `StableLM 3B 4E1T` for inference: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> device = "cuda" # the device to load the model onto
390_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#usage-tips
.md
>>> set_seed(0) >>> tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t") >>> model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t") >>> model.to(device) # doctest: +IGNORE_RESULT >>> model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device)
390_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#usage-tips
.md
>>> model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device) >>> generated_ids = model.generate(**model_inputs, max_length=32, do_sample=True) >>> responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) >>> responses ['The weather is always wonderful in Costa Rica, which makes it a prime destination for retirees. That’s where the Pensionado program comes in, offering'] ```
390_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#combining-stablelm-and-flash-attention-2
.md
First, make sure to install the latest version of Flash Attention v2. ```bash pip install -U flash-attn --no-build-isolation ``` Also make sure that your hardware is compatible with Flash-Attention 2. Read more about it in the official documentation of the [`flash-attn`](https://github.com/Dao-AILab/flash-attention) repository. Note: you must load your model in half-precision (e.g. `torch.bfloat16`). Now, to run the model with Flash Attention 2, refer to the snippet below: ```python >>> import torch
390_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#combining-stablelm-and-flash-attention-2
.md
Now, to run the model with Flash Attention 2, refer to the snippet below: ```python >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> device = "cuda" # the device to load the model onto
390_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#combining-stablelm-and-flash-attention-2
.md
>>> set_seed(0) >>> tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t") >>> model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2") # doctest: +SKIP >>> model.to(device) # doctest: +SKIP >>> model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device)
390_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#combining-stablelm-and-flash-attention-2
.md
>>> model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device) >>> generated_ids = model.generate(**model_inputs, max_length=32, do_sample=True) # doctest: +SKIP >>> responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) # doctest: +SKIP >>> responses # doctest: +SKIP ['The weather is always wonderful in Costa Rica, which makes it a prime destination for retirees. That’s where the Pensionado program comes in, offering'] ```
390_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
This is the configuration class to store the configuration of a [`~StableLmModel`]. It is used to instantiate a StableLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the StableLM [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used
390_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50304): Vocabulary size of the StableLM model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`StableLmModel`]. intermediate_size (`int`, *optional*, defaults to 6912): Dimension of the MLP representations.
390_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
intermediate_size (`int`, *optional*, defaults to 6912): Dimension of the MLP representations. hidden_size (`int`, *optional*, defaults to 2560): Dimension of the hidden representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*, defaults to 32):
390_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
num_key_value_heads (`int`, *optional*, defaults to 32): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
390_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `num_attention_heads`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string). max_position_embeddings (`int`, *optional*, defaults to 4096):
390_5_4
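As a small illustration of the MHA/GQA/MQA switch described above, here is a sketch with deliberately tiny, made-up sizes so the randomly initialized model stays small:

```python
from transformers import StableLmConfig, StableLmModel

config = StableLmConfig(
    hidden_size=256,          # illustrative, not the 2560 default
    intermediate_size=512,
    num_hidden_layers=2,
    num_attention_heads=8,
    num_key_value_heads=4,    # 8 -> MHA, 1 -> MQA, anything in between -> GQA
)
model = StableLmModel(config)  # randomly initialized, for shape checking only
```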
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
The non-linear activation function (function or string). max_position_embeddings (`int`, *optional*, defaults to 4096): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05):
390_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to `10000.0`):
390_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to `10000.0`): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and expect the model to work on a longer `max_position_embeddings`, we recommend updating this value accordingly. Expected contents: `rope_type` (`str`):
390_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
accordingly. Expected contents: `rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length.
390_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining. `attention_factor` (`float`, *optional*): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the `factor` field to infer the suggested value. `beta_fast` (`float`, *optional*):
390_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
`factor` field to infer the suggested value. `beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32. `beta_slow` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*):
390_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2. `long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (>
390_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
`long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (> `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2. `low_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE. `high_freq_factor` (`float`, *optional*):
390_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
`high_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE use_qkv_bias (`bool`, *optional*, defaults to `False`): Whether or not the model should use bias for qkv layers. qk_layernorm (`bool`, *optional*, defaults to `False`): Whether or not to normalize, per head, the Queries and Keys after projecting the hidden states. use_parallel_residual (`bool`, *optional*, defaults to `False`):
390_5_13
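A short sketch of passing the `rope_scaling` dictionary described above; the chosen rope type and factor are illustrative values, not recommendations:

```python
from transformers import StableLmConfig

# Scale the context window with dynamic RoPE scaling; values are illustrative.
config = StableLmConfig(
    max_position_embeddings=8192,
    rope_scaling={"rope_type": "dynamic", "factor": 2.0},
)
```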
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
use_parallel_residual (`bool`, *optional*, defaults to `False`): Whether to use a "parallel" formulation in each Transformer layer, which can provide a slight training speedup at large scales. hidden_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio after applying the MLP to the hidden states. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. partial_rotary_factor (`float`, *optional*, defaults to 0.25):
390_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
The dropout ratio for the attention probabilities. partial_rotary_factor (`float`, *optional*, defaults to 0.25): Percentage of the query and keys which will have rotary embedding. bos_token_id (int, *optional*, defaults to 0): The id of the `BOS` token in the vocabulary. eos_token_id (int, *optional*, defaults to 0): The id of the `EOS` token in the vocabulary. Example: ```python >>> from transformers import StableLmModel, StableLmConfig
390_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmconfig
.md
>>> # Initializing a StableLM stablelm-3b style configuration >>> configuration = StableLmConfig() ```
390_5_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmmodel
.md
The bare StableLm Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
390_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`StableLmConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
390_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`StableLmDecoderLayer`] Args: config: StableLmConfig Methods: forward
390_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmforcausallm
.md
No docstring available for StableLmForCausalLM Methods: forward
390_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmforsequenceclassification
.md
The StableLm transformer with a sequence classification head on top (linear layer). [`StableLmForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it requires to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
390_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
390_8_1
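To make the last-token selection concrete, here is a standalone sketch of the rule described above, assuming right-padded batches; it mirrors the behavior but is not the library's exact code:

```python
import torch

def last_token_logits(logits, input_ids, pad_token_id):
    """Pick each row's logits at its last non-padding position."""
    if pad_token_id is None:
        # No pad token defined: simply take the last position in every row.
        sequence_lengths = torch.full((input_ids.shape[0],), input_ids.shape[1] - 1)
    else:
        # Index of the last non-padding token, assuming right padding.
        sequence_lengths = (input_ids != pad_token_id).long().sum(dim=-1) - 1
    return logits[torch.arange(input_ids.shape[0]), sequence_lengths]
```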
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
390_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmforsequenceclassification
.md
and behavior. Parameters: config ([`StableLmConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
390_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmfortokenclassification
.md
The StableLm Model transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
390_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmfortokenclassification
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`StableLmConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
390_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/stablelm.md
https://huggingface.co/docs/transformers/en/model_doc/stablelm/#stablelmfortokenclassification
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
390_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
391_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
391_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#overview
.md
The DiffLlama model was proposed in [Differential Transformer](https://arxiv.org/abs/2410.05258) and contributed by Kazuma Matsumoto. This model combines the Llama model with the Differential Transformer's attention mechanism. The abstract from the paper is the following:
391_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#overview
.md
*Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, the differential attention mechanism calculates attention scores as the difference between two separate softmax attention maps. The subtraction cancels noise, promoting the emergence of sparse attention patterns. Experimental results on language modeling show that Diff Transformer outperforms Transformer in
391_1_1
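A toy, single-head sketch of the differential attention idea from the abstract: attention is the difference of two softmax maps weighted by a scalar λ. The real DiffLlama attention additionally uses causal masking and a learnable, reparameterized λ:

```python
import torch
import torch.nn.functional as F

def differential_attention(q1, k1, q2, k2, v, lam):
    """Single-head toy version: difference of two softmax attention maps."""
    d = q1.shape[-1]
    attn1 = F.softmax(q1 @ k1.transpose(-1, -2) / d**0.5, dim=-1)
    attn2 = F.softmax(q2 @ k2.transpose(-1, -2) / d**0.5, dim=-1)
    return (attn1 - lam * attn2) @ v  # noise common to both maps cancels out

# Example with random tensors: batch of 1, sequence of 8, head dimension 16.
b, s, d = 1, 8, 16
out = differential_attention(
    torch.randn(b, s, d), torch.randn(b, s, d),
    torch.randn(b, s, d), torch.randn(b, s, d),
    torch.randn(b, s, d), lam=0.5,
)
```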
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#overview
.md
of sparse attention patterns. Experimental results on language modeling show that Diff Transformer outperforms Transformer in various settings of scaling up model size and training tokens. More intriguingly, it offers notable advantages in practical applications, such as long-context modeling, key information retrieval, hallucination mitigation, in-context learning, and reduction of activation outliers. By being less distracted by irrelevant context, Diff Transformer can mitigate hallucination in question
391_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#overview
.md
of activation outliers. By being less distracted by irrelevant context, Diff Transformer can mitigate hallucination in question answering and text summarization. For in-context learning, Diff Transformer not only enhances accuracy but is also more robust to order permutation, which was considered as a chronic robustness issue. The results position Diff Transformer as a highly effective and promising architecture to advance large language models.*
391_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#usage-tips
.md
The hyperparameters of this model are the same as those of the Llama model.
391_2_0
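Since the hyperparameters mirror Llama, usage follows the standard causal-LM flow. A hedged sketch, assuming the [kajuma/DiffLlama-0.3B-handcut](https://huggingface.co/kajuma/DiffLlama-0.3B-handcut) checkpoint referenced in the configuration below loads through the Auto classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
model = AutoModelForCausalLM.from_pretrained("kajuma/DiffLlama-0.3B-handcut")

inputs = tokenizer("Differential attention helps the model", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```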
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig
.md
This is the configuration class to store the configuration of a [`DiffLlamaModel`]. It is used to instantiate a DiffLlama model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [kajuma/DiffLlama-0.3B-handcut](https://huggingface.co/kajuma/DiffLlama-0.3B-handcut). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
391_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the DiffLlama model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`DiffLlamaModel`] hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden representations.
391_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig
.md
hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 8192): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 16): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*):
391_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig
.md
Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
391_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md
https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig
.md
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `num_attention_heads`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
391_3_4
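A small configuration sketch along the lines of the arguments above; the sizes are deliberately tiny and do not correspond to any released checkpoint:

```python
from transformers import DiffLlamaConfig, DiffLlamaModel

# Tiny, randomly initialized DiffLlama for illustration; real checkpoints use the
# defaults listed above (hidden_size=2048, 16 layers, 32 attention heads, ...).
config = DiffLlamaConfig(
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=2,
    num_attention_heads=8,
    num_key_value_heads=4,  # grouped-query attention, as described above
)
model = DiffLlamaModel(config)
```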