source
url
file_type
chunk
chunk_id
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitforimageclassification
.md
ViT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. <Tip> Note that it's possible to fine-tune ViT on higher resolution images than the ones it has been trained on, by setting `interpolate_pos_encoding` to `True` in the forward of the model. This will interpolate the pre-trained position embeddings to the higher resolution. </Tip>
140_11_0
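As a minimal sketch of the tip above, the forward call below passes `interpolate_pos_encoding=True` to run a 224px-pretrained checkpoint on 384x384 inputs. The `google/vit-base-patch16-224` checkpoint and the random pixel values are illustrative assumptions, not part of the original text.

```python
import torch
from transformers import ViTForImageClassification

# Sketch: run a checkpoint pretrained at 224x224 on 384x384 inputs by
# interpolating the pre-trained position embeddings at forward time.
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

pixel_values = torch.randn(1, 3, 384, 384)  # placeholder for a real preprocessed image
with torch.no_grad():
    outputs = model(pixel_values, interpolate_pos_encoding=True)
print(outputs.logits.shape)  # (1, num_labels)
```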
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitforimageclassification
.md
position embeddings to the higher resolution. </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ViTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
140_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitforimageclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
140_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#tfvitmodel
.md
No docstring available for TFViTModel Methods: call
140_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#tfvitforimageclassification
.md
No docstring available for TFViTForImageClassification Methods: call </tf> <jax>
140_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#flaxvitmodel
.md
No docstring available for FlaxViTModel Methods: __call__
140_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#flaxvitforimageclassification
.md
No docstring available for FlaxViTForImageClassification Methods: __call__ </jax> </frameworkcontent>
140_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
141_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
141_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#overview
.md
Jamba is a state-of-the-art, hybrid SSM-Transformer LLM. It is the first production-scale Mamba implementation, which opens up interesting research and application opportunities. While this initial experimentation shows encouraging gains, we expect these to be further enhanced with future optimizations and explorations. For full details of this model please read the [release blog post](https://www.ai21.com/blog/announcing-jamba).
141_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#model-details
.md
Jamba is a pretrained, mixture-of-experts (MoE) generative text model, with 12B active parameters and a total of 52B parameters across all experts. It supports a 256K context length, and can fit up to 140K tokens on a single 80GB GPU.
141_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#model-details
.md
As depicted in the diagram below, Jamba's architecture features a blocks-and-layers approach that allows Jamba to successfully integrate the Transformer and Mamba architectures. Each Jamba block contains either an attention or a Mamba layer, followed by a multi-layer perceptron (MLP), producing an overall ratio of one Transformer layer out of every eight total layers.
141_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#model-details
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/jamba_architecture.png" alt="drawing" width="600"/>
141_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#prerequisites
.md
Jamba requires you to use `transformers` version 4.39.0 or higher: ```bash pip install "transformers>=4.39.0" ``` In order to run optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`: ```bash pip install mamba-ssm "causal-conv1d>=1.2.0" ``` The model also has to be on a CUDA device.
141_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#prerequisites
.md
```bash pip install mamba-ssm "causal-conv1d>=1.2.0" ``` The model also has to be on a CUDA device. You can run the model without the optimized Mamba kernels, but it is **not** recommended as it will result in significantly higher latency. To do so, specify `use_mamba_kernels=False` when loading the model.
141_3_1
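A minimal sketch of the fallback mentioned above, assuming the `ai21labs/Jamba-v0.1` checkpoint: passing `use_mamba_kernels=False` trades speed for not requiring `mamba-ssm`/`causal-conv1d` or a CUDA device.

```python
from transformers import AutoModelForCausalLM

# Sketch: load Jamba without the optimized Mamba kernels (slower, but works
# when mamba-ssm / causal-conv1d are not installed).
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1",
    use_mamba_kernels=False,
)
```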
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1") tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1") input_ids = tokenizer("In the recent Super Bowl LVIII,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216)
141_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
print(tokenizer.batch_decode(outputs))
141_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
# ["<|startoftext|>In the recent Super Bowl LVIII, the Kansas City Chiefs emerged victorious, defeating the San Francisco 49ers in a thrilling overtime showdown. The game was a nail-biter, with both teams showcasing their skills and determination.\n\nThe Chiefs, led by their star quarterback Patrick Mahomes, displayed their offensive prowess, while the 49ers, led by their strong defense, put up a tough fight. The game went into overtime, with the Chiefs ultimately securing the win with a touchdown.\n\nThe
141_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
put up a tough fight. The game went into overtime, with the Chiefs ultimately securing the win with a touchdown.\n\nThe victory marked the Chiefs' second Super Bowl win in four years, solidifying their status as one of the top teams in the NFL. The game was a testament to the skill and talent of both teams, and a thrilling end to the NFL season.\n\nThe Super Bowl is not just about the game itself, but also about the halftime show and the commercials. This year's halftime show featured a star-studded
141_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
about the game itself, but also about the halftime show and the commercials. This year's halftime show featured a star-studded lineup, including Usher, Alicia Keys, and Lil Jon. The show was a spectacle of music and dance, with the performers delivering an energetic and entertaining performance.\n"]
141_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
``` <details> <summary><strong>Loading the model in half precision</strong></summary> The published checkpoint is saved in BF16. In order to load it into RAM in BF16/FP16, you need to specify `torch_dtype`: ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16) # you can also use torch_dtype=torch.float16 ```
141_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
# you can also use torch_dtype=torch.float16 ``` When using half precision, you can enable the [FlashAttention2](https://github.com/Dao-AILab/flash-attention) implementation of the Attention blocks. In order to use it, you also need the model on a CUDA device. Since in this precision the model is too big to fit on a single 80GB GPU, you'll also need to parallelize it using [accelerate](https://huggingface.co/docs/accelerate/index): ```python from transformers import AutoModelForCausalLM import torch
141_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map="auto") ``` </details> <details><summary><strong>Load the model in 8-bit</strong></summary>
141_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
device_map="auto") ``` </details> <details><summary><strong>Load the model in 8-bit</strong></summary> **Using 8-bit precision, it is possible to fit up to 140K sequence lengths on a single 80GB GPU.** You can easily quantize the model to 8-bit using [bitsandbytes](https://huggingface.co/docs/bitsandbytes/index). In order to not degrade model quality, we recommend to exclude the Mamba blocks from the quantization: ```python from transformers import AutoModelForCausalLM, BitsAndBytesConfig
141_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#run-the-model
.md
```python from transformers import AutoModelForCausalLM, BitsAndBytesConfig import torch quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_skip_modules=["mamba"]) model = AutoModelForCausalLM.from_pretrained( "ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", quantization_config=quantization_config ) ``` </details>
141_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
This is the configuration class to store the configuration of a [`JambaModel`]. It is used to instantiate a Jamba model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Jamba-v0.1 model. [ai21labs/Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
141_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 65536): Vocabulary size of the Jamba model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`JambaModel`] tie_word_embeddings (`bool`, *optional*, defaults to `False`):
141_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
`inputs_ids` passed when calling [`JambaModel`] tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the model has an output word embedding layer. hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 14336): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32):
141_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*, defaults to 8): This is the number of key_value heads that should be used to implement Grouped Query Attention. If
141_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details check out [this
141_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
by meanpooling all the original heads within that group. For more details check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
141_5_5
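The mean-pooling step described above can be sketched as follows; the tensor shapes are illustrative defaults, not code from the library.

```python
import torch

# Sketch: convert an MHA key projection into a GQA one by averaging each group
# of original key heads (value heads are handled the same way).
num_attention_heads, num_key_value_heads, head_dim, hidden_size = 32, 8, 128, 4096
group_size = num_attention_heads // num_key_value_heads

k_proj = torch.randn(num_attention_heads * head_dim, hidden_size)  # original MHA weight
k_grouped = k_proj.view(num_key_value_heads, group_size, head_dim, hidden_size).mean(dim=1)
k_proj_gqa = k_grouped.reshape(num_key_value_heads * head_dim, hidden_size)
print(k_proj_gqa.shape)  # torch.Size([1024, 4096])
```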
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. num_logits_to_keep (`int` or `None`, *optional*, defaults to 1):
141_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
relevant if `config.is_decoder=True`. num_logits_to_keep (`int` or `None`, *optional*, defaults to 1): Number of prompt logits to calculate during generation. If `None`, all logits will be calculated. If an integer value, only the last `num_logits_to_keep` logits will be calculated. Default is 1 because only the logits of the last prompt token are needed for generation. For long sequences, the logits for the entire sequence may use a lot of memory, so setting `num_logits_to_keep=1` will reduce the memory footprint
141_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
sequence may use a lot of memory, so setting `num_logits_to_keep=1` will reduce the memory footprint significantly. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss. See [here]() for more details router_aux_loss_coef (`float`, *optional*, defaults to 0.001): The aux loss factor for the total loss. pad_token_id (`int`, *optional*, defaults to 0):
141_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
The aux loss factor for the total loss. pad_token_id (`int`, *optional*, defaults to 0): The id of the padding token. bos_token_id (`int`, *optional*, defaults to 1): The id of the "beginning-of-sequence" token. eos_token_id (`int`, *optional*, defaults to 2): The id of the "end-of-sequence" token. sliding_window (`int`, *optional*): Sliding window attention window size. If not specified, will default to `None`. max_position_embeddings (`int`, *optional*, defaults to 262144):
141_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
max_position_embeddings (`int`, *optional*, defaults to 262144): This value doesn't have any real effect. The maximum sequence length that this model is intended to be used with. It can be used with longer sequences, but performance may degrade. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. num_experts_per_tok (`int`, *optional*, defaults to 2): The number of experts to route per token; can also be interpreted as the `top-k` routing parameter
141_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
The number of experts to route per token; can also be interpreted as the `top-k` routing parameter num_experts (`int`, *optional*, defaults to 16): Number of experts per Sparse MLP layer. expert_layer_period (`int`, *optional*, defaults to 2): Once in this many layers, we will have an expert layer expert_layer_offset (`int`, *optional*, defaults to 1): The first layer index that contains an expert mlp layer attn_layer_period (`int`, *optional*, defaults to 8):
141_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
The first layer index that contains an expert mlp layer attn_layer_period (`int`, *optional*, defaults to 8): Once in this many layers, we will have a vanilla attention layer attn_layer_offset (`int`, *optional*, defaults to 4): The first layer index that contains a vanilla attention layer use_mamba_kernels (`bool`, *optional*, defaults to `True`): Flag indicating whether or not to use the fast mamba kernels. These are available only if `mamba-ssm` and
141_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
Flag indicating whether or not to use the fast mamba kernels. These are available only if `mamba-ssm` and `causal-conv1d` are installed, and the mamba modules are running on a CUDA device. Raises a ValueError if `True` and kernels are not available. mamba_d_state (`int`, *optional*, defaults to 16): The dimension of the mamba state space latents mamba_d_conv (`int`, *optional*, defaults to 4): The size of the mamba convolution kernel mamba_expand (`int`, *optional*, defaults to 2):
141_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
The size of the mamba convolution kernel mamba_expand (`int`, *optional*, defaults to 2): Expanding factor (relative to hidden_size) used to determine the mamba intermediate size mamba_dt_rank (`Union[int,str]`, *optional*, defaults to `"auto"`): Rank of the mamba discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)` mamba_conv_bias (`bool`, *optional*, defaults to `True`):
141_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaconfig
.md
mamba_conv_bias (`bool`, *optional*, defaults to `True`): Flag indicating whether or not to use bias in the convolution layer of the mamba mixer block. mamba_proj_bias (`bool`, *optional*, defaults to `False`): Flag indicating whether or not to use bias in the input and output projections (["in_proj", "out_proj"]) of the mamba mixer block
141_5_15
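Mirroring the `LlamaConfig` example further below, here is a hedged sketch of instantiating a `JambaConfig` and `JambaModel`. The shrunken sizes and `use_mamba_kernels=False` are assumptions chosen so the snippet stays runnable on CPU; the defaults describe the full 52B-parameter Jamba-v0.1.

```python
from transformers import JambaConfig, JambaModel

# Sketch: a toy-sized Jamba configuration and a randomly initialized model.
configuration = JambaConfig(
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    num_key_value_heads=4,
    num_experts=4,
    use_mamba_kernels=False,  # avoid requiring mamba-ssm / causal-conv1d
)
model = JambaModel(configuration)
configuration = model.config  # accessing the model configuration
```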
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambamodel
.md
The bare Jamba Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
141_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambamodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`JambaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
141_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambamodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`JambaDecoderLayer`] Args: config: JambaConfig Methods: forward
141_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaforcausallm
.md
No docstring available for JambaForCausalLM Methods: forward
141_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaforsequenceclassification
.md
The Jamba Model with a sequence classification head on top (linear layer). [`JambaForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
141_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
141_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
141_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jamba.md
https://huggingface.co/docs/transformers/en/model_doc/jamba/#jambaforsequenceclassification
.md
and behavior. Parameters: config ([`JambaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
141_8_3
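A sketch of the last-non-padding-token behavior described above, using a hypothetical toy-sized config (the real checkpoint is far larger); only the shapes and the masking are the point here.

```python
import torch
from transformers import JambaConfig, JambaForSequenceClassification

# Sketch: the classification head scores the hidden state of the last token
# that is not a padding token in each row.
config = JambaConfig(
    hidden_size=128, intermediate_size=256, num_hidden_layers=4,
    num_attention_heads=4, num_key_value_heads=2, num_experts=2,
    use_mamba_kernels=False, pad_token_id=0, num_labels=2,
)
model = JambaForSequenceClassification(config)

input_ids = torch.tensor([[11, 12, 13, 0, 0]])  # trailing zeros are padding
attention_mask = (input_ids != config.pad_token_id).long()
logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
print(logits.shape)  # torch.Size([1, 2])
```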
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
142_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
142_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#overview
.md
The LLaMA model was proposed in [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. It is a collection of foundation language models ranging from 7B to 65B parameters. The abstract from the paper is the following:
142_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#overview
.md
*We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research
142_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#overview
.md
and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.*
142_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#overview
.md
This model was contributed by [zphang](https://huggingface.co/zphang) with contributions from [BlackSamorez](https://huggingface.co/BlackSamorez). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama).
142_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#usage-tips
.md
- Weights for the LLaMA models can be obtained by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) - After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). The script can be called with the following (example) command:
142_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#usage-tips
.md
```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` - After conversion, the model and tokenizer can be loaded via: ```python from transformers import LlamaForCausalLM, LlamaTokenizer
142_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#usage-tips
.md
tokenizer = LlamaTokenizer.from_pretrained("/output/path") model = LlamaForCausalLM.from_pretrained("/output/path") ``` Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions come in several checkpoints, each checkpoint contains a part of each of the model's weights, so all of them need to be loaded in RAM). For the 65B model, that amounts to 130GB of RAM.
142_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#usage-tips
.md
- The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of a word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
142_2_3
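To make the quirk above concrete, here is a small sketch, assuming the `huggyllama/llama-7b` tokenizer used elsewhere on this page is available:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")
ids = tokenizer.encode("Banana", add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids))  # first token carries the '▁' word-start marker
print(repr(tokenizer.decode(ids)))           # 'Banana' -- no leading space is prepended
```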
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#usage-tips
.md
This model was contributed by [zphang](https://huggingface.co/zphang) with contributions from [BlackSamorez](https://huggingface.co/BlackSamorez). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). The Flax version of the implementation was contributed by [afmck](https://huggingface.co/afmck) with the code in the implementation based on Hugging
142_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#usage-tips
.md
implementation was contributed by [afmck](https://huggingface.co/afmck) with the code in the implementation based on Hugging Face's Flax GPT-Neo.
142_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#usage-tips
.md
Based on the original LLaMA model, Meta AI has released some follow-up works: - **Llama2**: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention), and is pre-trained on 2 trillion tokens. Refer to the documentation of Llama2 which can be found [here](llama2).
142_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/>
142_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#resources
.md
<PipelineTag pipeline="text-classification"/> - A [notebook](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb#scrollTo=f04ba4d2) on how to use prompt tuning to adapt the LLaMA model for text classification task. 🌎 <PipelineTag pipeline="question-answering"/>
142_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#resources
.md
<PipelineTag pipeline="question-answering"/> - [StackLLaMA: A hands-on guide to train LLaMA with RLHF](https://huggingface.co/blog/stackllama#stackllama-a-hands-on-guide-to-train-llama-with-rlhf), a blog post about how to train LLaMA to answer questions on [Stack Exchange](https://stackexchange.com/) with RLHF. βš—οΈ Optimization
142_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#resources
.md
βš—οΈ Optimization - A [notebook](https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing) on how to fine-tune LLaMA model using xturing library on GPU which has limited memory. 🌎 ⚑️ Inference - A [notebook](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/02%20-%20Evaluate.ipynb) on how to run the LLaMA Model using PeftModel from the πŸ€— PEFT library. 🌎
142_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#resources
.md
- A [notebook](https://colab.research.google.com/drive/1l2GiSSPbajVyp2Nk3CFT4t3uH6-5TiBe?usp=sharing) on how to load a PEFT adapter LLaMA model with LangChain. 🌎 🚀 Deploy - A [notebook](https://colab.research.google.com/github/lxe/simple-llama-finetuner/blob/master/Simple_LLaMA_FineTuner.ipynb#scrollTo=3PM_DilAZD8T) on how to fine-tune the LLaMA model using the LoRA method via the 🤗 PEFT library with an intuitive UI. 🌎
142_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#resources
.md
- A [notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-open-llama.ipynb) on how to deploy the Open-LLaMA model for text generation on Amazon SageMaker. 🌎
142_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
This is the configuration class to store the configuration of a [`LlamaModel`]. It is used to instantiate an LLaMA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the LLaMA-7B. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args:
142_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`LlamaModel`] hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 11008): Dimension of the MLP representations.
142_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
intermediate_size (`int`, *optional*, defaults to 11008): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If
142_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details check out [this
142_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
by meanpooling all the original heads within that group. For more details check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Llama 1 supports up to 2048 tokens,
142_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
The maximum sequence length that this model might ever be used with. Llama 1 supports up to 2048 tokens, Llama 2 up to 4096, CodeLlama up to 16384. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`):
142_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*): Padding token id. bos_token_id (`int`, *optional*, defaults to 1): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 2): End of stream token id. pretraining_tp (`int`, *optional*, defaults to 1):
142_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
eos_token_id (`int`, *optional*, defaults to 2): End of stream token id. pretraining_tp (`int`, *optional*, defaults to 1): Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to understand more about it. This value is necessary to ensure exact reproducibility of the pretraining results. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/76232).
142_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
results. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/76232). tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie weight embeddings rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply new rope type
142_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work on a longer `max_position_embeddings`, we recommend updating this value accordingly. Expected contents: `rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*):
142_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining.
142_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining. `attention_factor` (`float`, *optional*): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the `factor` field to infer the suggested value. `beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
142_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
`beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32. `beta_slow` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (<
142_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
`short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (> `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
142_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `low_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE `high_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE attention_bias (`bool`, *optional*, defaults to `False`):
142_4_14
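As an illustration of the `rope_scaling` dictionary described above, here is a hedged sketch of a linear-scaling configuration; the concrete numbers are arbitrary examples, not recommended settings.

```python
from transformers import LlamaConfig

# Sketch: linear RoPE scaling with factor 4, intended for sequences roughly
# 4x longer than the original pre-training length.
config = LlamaConfig(
    max_position_embeddings=8192,
    rope_scaling={"rope_type": "linear", "factor": 4.0},
)
print(config.rope_scaling)
```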
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
attention_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. mlp_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers. head_dim (`int`, *optional*):
142_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers. head_dim (`int`, *optional*): The attention head dimension. If None, it will default to hidden_size // num_attention_heads ```python >>> from transformers import LlamaModel, LlamaConfig
142_4_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaconfig
.md
>>> # Initializing a LLaMA llama-7b style configuration >>> configuration = LlamaConfig() >>> # Initializing a model from the llama-7b style configuration >>> model = LlamaModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
142_4_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is no padding token in the original model. Args: vocab_file (`str`): Path to the vocabulary file. unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`):
142_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
token instead. bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"</s>"`): The end of sequence token. pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by
142_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. sp_model_kwargs (`Dict[str, Any]`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization.
142_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
142_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. add_bos_token (`bool`, *optional*, defaults to `True`): Whether or not to add a `bos_token` at the start of sequences. add_eos_token (`bool`, *optional*, defaults to `False`): Whether or not to add an `eos_token` at the end of sequences. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
142_5_4
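A hedged sketch of passing `sp_model_kwargs` to enable the subword regularization described above; the checkpoint name and sampling values are illustrative choices, not recommendations.

```python
from transformers import LlamaTokenizer

# Sketch: enable sentencepiece subword regularization; tokenization then
# becomes stochastic, which is typically only desirable during training.
tokenizer = LlamaTokenizer.from_pretrained(
    "huggyllama/llama-7b",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
print(tokenizer.tokenize("Hello this is a test"))  # may differ from call to call
```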
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`): Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces. use_default_system_prompt (`bool`, *optional*, defaults to `False`): Whether or not the default system prompt for Llama should be used. spaces_between_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not to add spaces between special tokens. legacy (`bool`, *optional*):
142_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
Whether or not to add spaces between special tokens. legacy (`bool`, *optional*): Whether or not the `legacy` behavior of the tokenizer should be used. Legacy is before the merge of #24622 and #25224 which includes fixes to properly handle tokens that appear after special tokens. Make sure to also set `from_slow` to `True`. A simple example: - `legacy=True`: ```python >>> from transformers import LlamaTokenizerFast
142_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
>>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=True, from_slow=True) >>> tokenizer.encode("Hello <s>.") # 869 is '▁.' [1, 15043, 29871, 1, 869] ``` - `legacy=False`: ```python >>> from transformers import LlamaTokenizerFast
142_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
>>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=False, from_slow=True) >>> tokenizer.encode("Hello <s>.") # 29889 is '.' [1, 15043, 29871, 1, 29889] ``` Checkout the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details. add_prefix_space (`bool`, *optional*, defaults to `True`): Whether or not to add an initial space to the input. This allows to treat the leading word just as any
142_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizer
.md
Whether or not to add an initial space to the input. This allows treating the leading word just like any other word. Again, this should be set with `from_slow=True` to make sure it's taken into account. Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
142_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast
.md
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. This uses notably ByteFallback and no normalization. ```python >>> from transformers import LlamaTokenizerFast
142_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast
.md
>>> tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer") >>> tokenizer.encode("Hello this is a test") [1, 15043, 445, 338, 263, 1243] ``` If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the model, or call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the
142_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast
.md
call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the values of the first token and final token of an encoded sequence will not be correct). For more details, check out the [post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation. This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args:
142_6_2
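A short sketch of the `update_post_processor()` note above, assuming the `hf-internal-testing/llama-tokenizer` checkpoint shown earlier; toggling `add_eos_token` stands in for any change to how sequences are wrapped with special tokens.

```python
from transformers import LlamaTokenizerFast

tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
tokenizer.add_eos_token = True          # change how encoded sequences are wrapped
tokenizer.update_post_processor()       # keep the post-processing template in sync
print(tokenizer.encode("Hello this is a test"))  # now ends with the eos token id
```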
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md
https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast
.md
refer to this superclass for more information regarding those methods. Args: vocab_file (`str`, *optional*): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that contains the vocabulary necessary to instantiate a tokenizer. tokenizer_file (`str`, *optional*): [tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that contains everything needed to load the tokenizer.
142_6_3