source: stringclasses, 470 values
url: stringlengths, 49 to 167
file_type: stringclasses, 1 value
chunk: stringlengths, 1 to 512
chunk_id: stringlengths, 5 to 9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#overview
.md
The RWKV model was proposed in [this repo](https://github.com/BlinkDL/RWKV-LM). It suggests a tweak to the traditional Transformer attention to make it linear. This way, the model can be used as a recurrent network: passing inputs for timestep 0 and timestep 1 together is the same as passing inputs at timestep 0, then inputs at timestep 1 along with the state from timestep 0 (see example below).
320_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#overview
.md
This can be more efficient than a regular Transformer and can deal with sentences of any length (even if the model uses a fixed context length for training). This model was contributed by [sgugger](https://huggingface.co/sgugger). The original code can be found [here](https://github.com/BlinkDL/RWKV-LM).
320_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#usage-example
.md
```py
import torch
from transformers import AutoTokenizer, RwkvConfig, RwkvModel

model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")

inputs = tokenizer("This is an example.", return_tensors="pt")
# Feed everything to the model
outputs = model(inputs["input_ids"])
output_whole = outputs.last_hidden_state

outputs = model(inputs["input_ids"][:, :2])
output_one = outputs.last_hidden_state
320_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#usage-example
.md
outputs = model(inputs["input_ids"][:, :2])
output_one = outputs.last_hidden_state

# Using the state computed on the first inputs, we will get the same output
outputs = model(inputs["input_ids"][:, 2:], state=outputs.state)
output_two = outputs.last_hidden_state
320_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#usage-example
.md
torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5)
```

If you want to make sure the model stops generating when `'\n\n'` is detected, we recommend using the following stopping criteria:

```python
from transformers import StoppingCriteria

class RwkvStoppingCriteria(StoppingCriteria):
    def __init__(self, eos_sequence=[187, 187], eos_token_id=537):
        self.eos_sequence = eos_sequence
        self.eos_token_id = eos_token_id
320_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#usage-example
.md
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        last_2_ids = input_ids[:, -2:].tolist()
        return self.eos_sequence in last_2_ids

output = model.generate(inputs["input_ids"], max_new_tokens=64, stopping_criteria=[RwkvStoppingCriteria()])
```
320_2_3
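The same state mechanism carries over to generation with the language-modeling head. Below is a minimal, hedged sketch of a greedy decoding loop that feeds one token at a time; it assumes `RwkvForCausalLM` accepts and returns `state` the same way `RwkvModel` does above, and it reuses the `sgugger/rwkv-430M-pile` checkpoint from the example:

```python
import torch
from transformers import AutoTokenizer, RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")

input_ids = tokenizer("This is an example.", return_tensors="pt")["input_ids"]

# Process the whole prompt once and keep the recurrent state
outputs = model(input_ids)
state = outputs.state
next_id = outputs.logits[:, -1].argmax(dim=-1, keepdim=True)
generated = [next_id]

# Each step only feeds the newest token together with the state,
# instead of re-running the full prefix as a regular Transformer would
for _ in range(16):
    outputs = model(next_id, state=state)
    state = outputs.state
    next_id = outputs.logits[:, -1].argmax(dim=-1, keepdim=True)
    generated.append(next_id)

print(tokenizer.decode(torch.cat(generated, dim=1)[0]))
```

In practice `model.generate()` is expected to handle this state bookkeeping internally; the explicit loop is only meant to make the recurrence visible.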
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
This is the configuration class to store the configuration of a [`RwkvModel`]. It is used to instantiate an RWKV model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RWKV-4 [RWKV/rwkv-4-169m-pile](https://huggingface.co/RWKV/rwkv-4-169m-pile) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
320_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    vocab_size (`int`, *optional*, defaults to 50277):
        Vocabulary size of the RWKV model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`RwkvModel`].
    context_length (`int`, *optional*, defaults to 1024):
320_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
    `inputs_ids` passed when calling [`RwkvModel`].
    context_length (`int`, *optional*, defaults to 1024):
        The maximum sequence length this model can handle in a single forward pass (using it in RNN mode lets you use any sequence length).
    hidden_size (`int`, *optional*, defaults to 4096):
        Dimensionality of the embeddings and hidden states.
    num_hidden_layers (`int`, *optional*, defaults to 32):
        Number of hidden layers in the model.
    attention_hidden_size (`int`, *optional*):
320_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
        Number of hidden layers in the model.
    attention_hidden_size (`int`, *optional*):
        Dimensionality of the attention hidden states. Will default to `hidden_size` if unset.
    intermediate_size (`int`, *optional*):
        Dimensionality of the inner feed-forward layers. Will default to 4 times `hidden_size` if unset.
    layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
        The epsilon to use in the layer normalization layers.
    bos_token_id (`int`, *optional*, defaults to 0):
320_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
        The epsilon to use in the layer normalization layers.
    bos_token_id (`int`, *optional*, defaults to 0):
        The id of the beginning-of-sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer as GPTNeoX.
    eos_token_id (`int`, *optional*, defaults to 0):
        The id of the end-of-sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer as GPTNeoX.
    rescale_every (`int`, *optional*, defaults to 6):
320_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
        GPTNeoX.
    rescale_every (`int`, *optional*, defaults to 6):
        At inference, the hidden states (and weights of the corresponding output layers) are divided by 2 every `rescale_every` layers. If set to 0 or a negative number, no rescaling is done.
    tie_word_embeddings (`bool`, *optional*, defaults to `False`):
        Whether or not to tie the word embeddings with the input token embeddings.
    use_cache (`bool`, *optional*, defaults to `True`):
        Whether or not the model should return the last state.

Example:

```python
320_3_5
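As an illustration of the `rescale_every` schedule described above (an illustrative sketch, not the actual implementation): with the default of 6, the division by 2 compounds every 6 layers.

```python
rescale_every = 6

def rescale_factor(layer_id: int) -> int:
    # Hidden states / output weights are divided by 2 every `rescale_every` layers;
    # 0 or a negative value disables the rescaling.
    return 2 ** (layer_id // rescale_every) if rescale_every > 0 else 1

print([rescale_factor(i) for i in range(14)])
# [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 4, 4]
```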
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
Whether or not the model should return the last state.

Example:

```python
>>> from transformers import RwkvConfig, RwkvModel
320_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
.md
>>> # Initializing a Rwkv configuration
>>> configuration = RwkvConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = RwkvModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
320_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvmodel
.md
The bare RWKV Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
320_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`RwkvConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
320_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
320_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvlmheadmodel
.md
The RWKV Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
320_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvlmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`RwkvConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
320_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvlmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
320_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
In a traditional auto-regressive Transformer, attention is written as $$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$
320_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
where \\(Q\\), \\(K\\) and \\(V\\) are matrices of shape `seq_len x hidden_size` named query, key and value (they are actually bigger tensors with a batch dimension and an attention head dimension, but we're only interested in the last two, which is where the matrix product is taken, so for simplicity we only consider those two). The product \\(QK^{T}\\) then has shape `seq_len x seq_len` and we can take its matrix product with \\(V\\) to get the output \\(O\\) of the same shape as the others.
320_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
Replacing the softmax by its value gives: $$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$ Note that the entries in \\(QK^{T}\\) corresponding to \\(j > i\\) are masked (the sum stops at j) because the attention is not allowed to look at future tokens (only past ones). In comparison, the RWKV attention is given by $$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$
320_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
$$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$ where \\(R\\) is a new matrix called receptance by the author, \\(K\\) and \\(V\\) are still the key and value (\\(\sigma\\) here is the sigmoid function). \\(W\\) is a new vector that represents the position of the token and is given by $$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$
320_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
$$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$ with \\(u\\) and \\(w\\) learnable parameters called in the code `time_first` and `time_decay` respectively. The numerator and denominator can both be expressed recursively. Naming them \\(N_{i}\\) and \\(D_{i}\\) we have: $$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} \cdots + e^{(i-2)w + K_{1}} V_{1}$$
320_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
so \\(\hat{N}_{i}\\) (called `numerator_state` in the code) satisfies $$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$ and $$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} \cdots + e^{(i-2)w + K_{1}}$$ so \\(\hat{D}_{i}\\) (called `denominator_state` in the code) satisfies $$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$
320_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
$$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$ The actual recurrent formulas used are a tiny bit more complex, as for numerical stability we don't want to compute exponentials of big numbers. Usually the softmax is not computed as is; instead, the exponential of the maximum term is divided out of both the numerator and the denominator: $$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$
320_6_6
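A quick numerical check of that identity (a small sketch, not part of the original doc): subtracting the maximum from every exponent leaves the softmax unchanged while keeping the exponentials from overflowing.

```python
import torch

x = torch.tensor([1000.0, 1001.0, 1002.0])

naive = torch.exp(x) / torch.exp(x).sum()                      # exp overflows to inf -> nan
stable = torch.exp(x - x.max()) / torch.exp(x - x.max()).sum()

print(naive)   # tensor([nan, nan, nan])
print(stable)  # tensor([0.0900, 0.2447, 0.6652])
```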
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
$$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$ with \\(M\\) the maximum of all \\(x_{j}\\). So here on top of saving the numerator state (\\(\hat{N}\\)) and the denominator state (\\(\hat{D}\\)) we also keep track of the maximum of all terms encountered in the exponentials. So we actually use $$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$ defined by the following recurrent formulas:
320_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
defined by the following recurrent formulas: $$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and $$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and \\(M_{j+1} = q\\). With those, we can then compute
320_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
.md
and \\(M_{j+1} = q\\). With those, we can then compute $$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i} - q} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ and $$D_{i} = e^{u + K_{i} - q} + e^{M_{i} - q} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ which finally gives us $$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$
320_6_9
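Putting the recurrences above into code, here is a minimal sketch of the numerically stable linear attention in plain PyTorch. Variable names follow the formulas (`w` is `time_decay`, `u` is `time_first`); this illustrates the math and is not the actual `modeling_rwkv.py` kernel:

```python
import torch

def rwkv_linear_attention(r, k, v, w, u):
    # r, k, v: (seq_len, hidden_size); w (time_decay) and u (time_first): (hidden_size,)
    seq_len, hidden_size = k.shape
    out = torch.zeros_like(v)

    # Rescaled numerator/denominator states and the running max of the exponents
    num_state = torch.zeros(hidden_size)
    den_state = torch.zeros(hidden_size)
    max_state = torch.full((hidden_size,), -1e38)

    for i in range(seq_len):
        # Output for position i: the current token gets the `u` (time_first) bonus
        q = torch.maximum(u + k[i], max_state)
        num = torch.exp(u + k[i] - q) * v[i] + torch.exp(max_state - q) * num_state
        den = torch.exp(u + k[i] - q) + torch.exp(max_state - q) * den_state
        out[i] = torch.sigmoid(r[i]) * num / den

        # State update for the next position: previous contributions decay by e^w
        q = torch.maximum(k[i], w + max_state)
        num_state = torch.exp(k[i] - q) * v[i] + torch.exp(w + max_state - q) * num_state
        den_state = torch.exp(k[i] - q) + torch.exp(w + max_state - q) * den_state
        max_state = q

    return out

# Quick shape check with random inputs (negative decay keeps e^w < 1)
r, k, v = torch.randn(3, 5, 8).unbind(0)
w, u = -torch.rand(8), torch.randn(8)
print(rwkv_linear_attention(r, k, v, w, u).shape)  # torch.Size([5, 8])
```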
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
321_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
321_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#overview
.md
DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction. It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data.
321_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#overview
.md
It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. This provides 65x more possible combinations of experts and we found that this improves model quality. DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
321_1_1
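The "65x more possible combinations" figure is just the ratio of expert subsets; a quick check:

```python
from math import comb

dbrx_routing = comb(16, 4)     # 16 experts, choose 4 per token -> 1820 combinations
mixtral_routing = comb(8, 2)   # 8 experts, choose 2 per token  -> 28 combinations

print(dbrx_routing, mixtral_routing, dbrx_routing // mixtral_routing)  # 1820 28 65
```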
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#overview
.md
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA). It is a BPE based model and uses the GPT-4 tokenizer as described in the [tiktoken](https://github.com/openai/tiktoken) repository. We made these choices based on exhaustive evaluation and scaling experiments. DBRX was pretrained on 12T tokens of carefully curated data and a maximum context length of 32K tokens.
321_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#overview
.md
DBRX was pretrained on 12T tokens of carefully curated data and a maximum context length of 32K tokens. We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models. This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
321_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#overview
.md
We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality. More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
321_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#overview
.md
This model was contributed by [eitan-turok](https://huggingface.co/eitanturok) and [abhi-db](https://huggingface.co/abhi-db). The original code can be found [here](https://github.com/databricks/dbrx-instruct), though this may not be up to date.
321_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
The `generate()` method can be used to generate text using DBRX. You can generate using the standard attention implementation, flash-attention, and the PyTorch scaled dot product attention. The last two attention implementations give speed ups.

```python
from transformers import DbrxForCausalLM, AutoTokenizer
import torch
321_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="YOUR_HF_TOKEN")
model = DbrxForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    token="YOUR_HF_TOKEN",
)

input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
321_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```

If you have flash-attention installed (`pip install flash-attn`), it is possible to generate faster. (The HuggingFace documentation for flash-attention can be found [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2).)

```python
from transformers import DbrxForCausalLM, AutoTokenizer
import torch
321_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="YOUR_HF_TOKEN")
model = DbrxForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    token="YOUR_HF_TOKEN",
    attn_implementation="flash_attention_2",
)
321_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
input_text = "What does it take to build a great LLM?" messages = [{"role": "user", "content": input_text}] input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
321_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```

You can also generate faster using the PyTorch scaled dot product attention. (The HuggingFace documentation for scaled dot product attention can be found [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one#pytorch-scaled-dot-product-attention).)

```python
from transformers import DbrxForCausalLM, AutoTokenizer
import torch
321_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="YOUR_HF_TOKEN")
model = DbrxForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    token="YOUR_HF_TOKEN",
    attn_implementation="sdpa",
)
321_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
.md
input_text = "What does it take to build a great LLM?" messages = [{"role": "user", "content": input_text}] input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=200) print(tokenizer.decode(outputs[0])) ```
321_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
.md
This is the configuration class to store the configuration of a [`DbrxModel`]. It is used to instantiate a Dbrx model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a different configuration to that of the [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
321_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    d_model (`int`, *optional*, defaults to 2048):
        Dimensionality of the embeddings and hidden states.
    n_heads (`int`, *optional*, defaults to 16):
        Number of attention heads for each attention layer in the Transformer encoder.
    n_layers (`int`, *optional*, defaults to 24):
        Number of hidden layers in the Transformer encoder.
321_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
.md
    n_layers (`int`, *optional*, defaults to 24):
        Number of hidden layers in the Transformer encoder.
    max_seq_len (`int`, *optional*, defaults to 2048):
        The maximum sequence length of the model.
    vocab_size (`int`, *optional*, defaults to 32000):
        Vocabulary size of the Dbrx model. Defines the maximum number of different tokens that can be represented by the `inputs_ids` passed when calling [`DbrxModel`].
    resid_pdrop (`float`, *optional*, defaults to 0.0):
321_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
.md
        the `inputs_ids` passed when calling [`DbrxModel`].
    resid_pdrop (`float`, *optional*, defaults to 0.0):
        The dropout probability applied to the attention output before combining with residual.
    emb_pdrop (`float`, *optional*, defaults to 0.0):
        The dropout probability for the embedding layer.
    attn_config (`dict`, *optional*):
        A dictionary used to configure the model's attention module.
    ffn_config (`dict`, *optional*):
        A dictionary used to configure the model's FFN module.
321_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
.md
    ffn_config (`dict`, *optional*):
        A dictionary used to configure the model's FFN module.
    use_cache (`bool`, *optional*, defaults to `True`):
        Whether or not the model should return the last key/values attentions (not used by all models).
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    output_router_logits (`bool`, *optional*, defaults to `False`):
321_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
.md
    output_router_logits (`bool`, *optional*, defaults to `False`):
        Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss.

Example:

```python
>>> from transformers import DbrxConfig, DbrxModel
321_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
.md
>>> # Initializing a Dbrx configuration
>>> configuration = DbrxConfig(n_layers=2, d_model=256, n_heads=8, vocab_size=128)

>>> # Initializing a model (with random weights) from the configuration
>>> model = DbrxModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
321_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxmodel
.md
The bare DBRX Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
321_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`DbrxConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
321_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`DbrxBlock`] layer.

Args:
    config ([`DbrxConfig`]): Model configuration class with all parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
321_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
321_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxforcausallm
.md
The DBRX Model transformer for causal language modeling. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
321_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`DbrxConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
321_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxforcausallm
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
321_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
322_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
322_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
.md
The ByT5 model was presented in [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. The abstract from the paper is the following: *Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units.
322_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
.md
*Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by
322_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
.md
can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with
322_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
.md
operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on
322_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
.md
counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
322_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
.md
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/byt5). <Tip> ByT5's architecture is based on the T5v1.1 model, refer to [T5v1.1's documentation page](t5v1.1) for the API reference. They only differ in how inputs should be prepared for the model, see the code examples below. </Tip>
322_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
.md
only differ in how inputs should be prepared for the model, see the code examples below. </Tip> Since ByT5 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
322_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer:

```python
>>> from transformers import T5ForConditionalGeneration
>>> import torch

>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

>>> num_special_tokens = 3
>>> # Model has 3 special tokens which take up the input ids 0, 1, 2 of ByT5.
>>> # => Need to shift utf-8 character encodings by 3 before passing ids to the model.
322_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens
>>> labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens

>>> loss = model(input_ids, labels=labels).loss
>>> loss.item()
2.66
```

For batched inference and training it is however recommended to make use of the tokenizer:

```python
>>> from transformers import T5ForConditionalGeneration, AutoTokenizer
322_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

>>> model_inputs = tokenizer(
...     ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt"
... )
>>> labels_dict = tokenizer(
...     ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt"
... )
>>> labels = labels_dict.input_ids
322_2_2
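To connect the two snippets above: the tokenizer performs essentially the same byte-plus-offset mapping as the manual encoding, then appends the end-of-sequence token. A small hedged check (the offset of 3 follows from the three special tokens with ids 0, 1, 2 mentioned earlier):

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
>>> text = "Life is like a box of chocolates."

>>> # Manual encoding: raw UTF-8 bytes shifted past the 3 special tokens (ids 0, 1, 2)
>>> manual_ids = [byte + 3 for byte in text.encode("utf-8")]

>>> # The tokenizer applies the same shift and appends the end-of-sequence token
>>> tokenizer(text).input_ids[:-1] == manual_ids
True
>>> tokenizer(text).input_ids[-1] == tokenizer.eos_token_id
True
```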
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> loss = model(**model_inputs, labels=labels).loss
>>> loss.item()
17.9
```

Similar to [T5](t5), ByT5 was trained on the span-mask denoising task. However, since the model works directly on characters, the pretraining task is a bit different. Let's corrupt some characters of the input sentence `"The dog chases a ball in the park."` and ask ByT5 to predict them for us.

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> import torch
322_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")

>>> input_ids_prompt = "The dog chases a ball in the park."
>>> input_ids = tokenizer(input_ids_prompt).input_ids
322_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> # Note that we cannot add "{extra_id_...}" to the string directly
>>> # as the byte tokenizer would incorrectly merge the tokens.
>>> # For ByT5, we need to work directly on the character level.
>>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead
>>> # uses the final utf character ids.
>>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens.
>>> # => There are 2**8 + 3 = 259 input ids and mask tokens count down from index 258.
322_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> # => There are 2**8 + 3 = 259 input ids and mask tokens count down from index 258.
>>> # => mask to "The dog [258]a ball [257]park."
322_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]])
>>> input_ids
tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]])
322_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`.
>>> output_ids = model.generate(input_ids, max_length=100)[0].tolist()
>>> output_ids
322_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49]
322_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> # ^- Note how 258 descends to 257, 256, 255

>>> # Now we need to split on the sentinel tokens, let's write a short loop for this
>>> output_ids_list = []
>>> start_token = 0
>>> sentinel_token = 258
>>> while sentinel_token in output_ids:
...     split_idx = output_ids.index(sentinel_token)
...     output_ids_list.append(output_ids[start_token:split_idx])
...     start_token = split_idx
...     sentinel_token -= 1
322_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
.md
>>> output_ids_list.append(output_ids[start_token:])
>>> output_string = tokenizer.batch_decode(output_ids_list)
>>> output_string
['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.']
```
322_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#byt5tokenizer
.md
Construct a ByT5 tokenizer. ByT5 simply uses raw UTF-8 byte encoding.

This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Args:
    eos_token (`str`, *optional*, defaults to `"</s>"`):
        The end of sequence token.

        <Tip>

        When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`.

        </Tip>
322_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#byt5tokenizer
.md
        The token used is the `sep_token`.

        </Tip>

    unk_token (`str`, *optional*, defaults to `"<unk>"`):
        The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
    pad_token (`str`, *optional*, defaults to `"<pad>"`):
        The token used for padding, for example when batching sequences of different lengths.
    extra_ids (`int`, *optional*, defaults to 125):
        Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are
322_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#byt5tokenizer
.md
The number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are indexed from the end of the vocabulary up to the beginning ("<extra_id_0>" is the last token in the vocabulary, like in ByT5 preprocessing; see [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117)).
322_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#byt5tokenizer
.md
additional_special_tokens (`List[str]`, *optional*): Additional special tokens used by the tokenizer. See [`ByT5Tokenizer`] for all details.
322_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
323_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
323_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#overview
.md
The [Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/) model is a major update to [Qwen-VL](https://arxiv.org/pdf/2308.12966) from the Qwen team at Alibaba Research. The abstract from the blog is the following:
323_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#overview
.md
*This blog introduces Qwen2-VL, an advanced version of the Qwen-VL model that has undergone significant enhancements over the past year. Key improvements include enhanced image comprehension, advanced video understanding, integrated visual agent functionality, and expanded multilingual support. The model architecture has been optimized for handling arbitrary image resolutions through Naive Dynamic Resolution support and utilizes Multimodal Rotary Position Embedding (M-ROPE) to effectively process both 1D
323_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#overview
.md
Naive Dynamic Resolution support and utilizes Multimodal Rotary Position Embedding (M-ROPE) to effectively process both 1D textual and multi-dimensional visual data. This updated model demonstrates competitive performance against leading AI systems like GPT-4o and Claude 3.5 Sonnet in vision-related tasks and ranks highly among open-source models in text capabilities. These advancements make Qwen2-VL a versatile tool for various applications requiring robust multimodal processing and reasoning abilities.*
323_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#overview
.md
make Qwen2-VL a versatile tool for various applications requiring robust multimodal processing and reasoning abilities.*
323_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/qwen2_vl_architecture.jpeg" alt="drawing" width="600"/> <small> Qwen2-VL architecture. Taken from the <a href="https://qwenlm.github.io/blog/qwen2-vl/">blog post.</a> </small> This model was contributed by [simonJJJ](https://huggingface.co/simonJJJ).
323_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
The model can accept both images and videos as input. Here's example code for inference.

```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
323_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", device_map="auto")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
323_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
323_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
inputs = processor(text=[text_prompt], images=[image], padding=True, return_tensors="pt")
inputs = inputs.to("cuda")

# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(output_text)
323_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
# Video
def fetch_video(ele: Dict, nframe_factor=2):
    if isinstance(ele["video"], str):
        def round_by_factor(number: int, factor: int) -> int:
            return round(number / factor) * factor

        video = ele["video"]
        if video.startswith("file://"):
            video = video[7:]
323_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
        video, _, info = io.read_video(
            video,
            start_pts=ele.get("video_start", 0.0),
            end_pts=ele.get("video_end", None),
            pts_unit="sec",
            output_format="TCHW",
        )
        assert not ("fps" in ele and "nframes" in ele), "Only accept either `fps` or `nframes`"
        if "nframes" in ele:
            nframes = round_by_factor(ele["nframes"], nframe_factor)
        else:
            fps = ele.get("fps", 1.0)
            nframes = round_by_factor(video.size(0) / info["video_fps"] * fps, nframe_factor)
        idx = torch.linspace(0, video.size(0) - 1, nframes, dtype=torch.int64)
323_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
        idx = torch.linspace(0, video.size(0) - 1, nframes, dtype=torch.int64)
        return video[idx]
323_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
video_info = {"type": "video", "video": "/path/to/video.mp4", "fps": 1.0} video = fetch_video(video_info) conversation = [ { "role": "user", "content": [ {"type": "video"}, {"type": "text", "text": "What happened in the video?"}, ], } ]
323_2_7