source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#representation-learning | .md | A subfield of machine learning which focuses on learning meaningful representations of raw data. Some examples of representation learning techniques include word embeddings, autoencoders, and Generative Adversarial Networks (GANs). | 16_37_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#sampling-rate | .md | A measurement in hertz of the number of samples (the audio signal) taken per second. The sampling rate is a result of discretizing a continuous signal such as speech. | 16_38_0 |
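As a quick worked example of this definition (not part of the original glossary entry), the number of samples in a clip is simply the sampling rate multiplied by its duration:
```py
# A 2.5-second speech clip recorded at 16 kHz (a common rate for speech models)
sampling_rate = 16_000  # samples per second
duration_s = 2.5
num_samples = int(sampling_rate * duration_s)
print(num_samples)  # 40000
```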
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#self-attention | .md | Each element of the input finds out which other elements of the input it should attend to. | 16_39_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#self-supervised-learning | .md | A category of machine learning techniques in which a model creates its own learning objective from unlabeled data. It differs from [unsupervised learning](#unsupervised-learning) and [supervised learning](#supervised-learning) in that the learning process is supervised, but not explicitly from the user. | 16_40_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#self-supervised-learning | .md | One example of self-supervised learning is [masked language modeling](#masked-language-modeling-mlm), where a model is passed sentences with a proportion of its tokens removed and learns to predict the missing tokens. | 16_40_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#semi-supervised-learning | .md | A broad category of machine learning training techniques that leverages a small amount of labeled data with a larger quantity of unlabeled data to improve the accuracy of a model, unlike [supervised learning](#supervised-learning) and [unsupervised learning](#unsupervised-learning). | 16_41_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#semi-supervised-learning | .md | An example of a semi-supervised learning approach is "self-training", in which a model is trained on labeled data, and then used to make predictions on the unlabeled data. The portion of the unlabeled data that the model predicts with the most confidence gets added to the labeled dataset and used to retrain the model. | 16_41_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#sequence-to-sequence-seq2seq | .md | Models that generate a new sequence from an input, like translation models, or summarization models (such as
[Bart](model_doc/bart) or [T5](model_doc/t5)). | 16_42_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#sharded-ddp | .md | Another name for the foundational [ZeRO](#zero-redundancy-optimizer-zero) concept as used by various other implementations of ZeRO. | 16_43_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#stride | .md | In [convolution](#convolution) or [pooling](#pooling), the stride refers to the distance the kernel is moved over a matrix. A stride of 1 means the kernel is moved one pixel over at a time, and a stride of 2 means the kernel is moved two pixels over at a time. | 16_44_0 |
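To make the effect of the stride concrete, here is a minimal sketch (assuming PyTorch; not from the original glossary entry) showing how a stride of 2 roughly halves the spatial size of the output compared to a stride of 1:
```py
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # (batch, channels, height, width)

# Stride of 1: a 3x3 kernel moves one pixel at a time -> 30x30 output
conv_s1 = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1)
print(conv_s1(x).shape)  # torch.Size([1, 8, 30, 30])

# Stride of 2: the kernel moves two pixels at a time -> 15x15 output
conv_s2 = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=2)
print(conv_s2(x).shape)  # torch.Size([1, 8, 15, 15])
```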
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#supervised-learning | .md | A form of model training that directly uses labeled data to correct and instruct model performance. Data is fed into the model being trained, and its predictions are compared to the known labels. The model updates its weights based on how incorrect its predictions were, and the process is repeated to optimize model performance. | 16_45_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#tensor-parallelism-tp | .md | Parallelism technique for training on multiple GPUs in which each tensor is split up into multiple chunks, so instead of
having the whole tensor reside on a single GPU, each shard of the tensor resides on its designated GPU. Shards are
processed separately and in parallel on different GPUs, and the results are synced at the end of the processing step.
This is sometimes called horizontal parallelism, as the splitting happens at the horizontal level. | 16_46_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#tensor-parallelism-tp | .md | This is sometimes called horizontal parallelism, as the splitting happens at the horizontal level.
Learn more about Tensor Parallelism [here](perf_train_gpu_many#tensor-parallelism). | 16_46_1 |
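As an illustration of the sharding idea described above, here is a toy, single-device sketch (assumed, not from the original entry; real tensor parallelism uses distributed collectives across GPUs) in which a linear layer's weight is split column-wise, each shard is applied independently, and the partial results are concatenated at the end:
```py
import torch

torch.manual_seed(0)
x = torch.randn(4, 16)        # (batch, hidden)
weight = torch.randn(16, 32)  # full weight of a linear layer

# Split the weight column-wise into two shards; each would live on its own GPU.
shard_a, shard_b = weight.chunk(2, dim=1)

out_a = x @ shard_a  # computed on "GPU 0"
out_b = x @ shard_b  # computed on "GPU 1"

# Sync step: gather the partial results to recover the full output.
out_sharded = torch.cat([out_a, out_b], dim=1)
print(torch.allclose(out_sharded, x @ weight, atol=1e-5))  # True
```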
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#token | .md | A part of a sentence, usually a word, but can also be a subword (uncommon words are often split into subwords) or a
punctuation symbol. | 16_47_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#token-type-ids | .md | Some models are designed to classify pairs of sentences or to perform question answering.
<Youtube id="0u3ioSwev3s"/>
These require two different sequences to be joined in a single "input_ids" entry, which is usually done with the
help of special tokens, such as the classifier (`[CLS]`) and separator (`[SEP]`) tokens. For example, the BERT model
builds its two sequence input as such:
```python
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
``` | 16_48_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#token-type-ids | .md | builds its two sequence input as such:
```python
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```
We can use our tokenizer to automatically generate such a sentence by passing the two sequences to `tokenizer` as two
arguments (and not a list, like before) like this:
```python
>>> from transformers import BertTokenizer | 16_48_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#token-type-ids | .md | >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> sequence_a = "HuggingFace is based in NYC"
>>> sequence_b = "Where is HuggingFace based?" | 16_48_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#token-type-ids | .md | >>> encoded_dict = tokenizer(sequence_a, sequence_b)
>>> decoded = tokenizer.decode(encoded_dict["input_ids"])
```
which will return:
```python
>>> print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
```
This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying | 16_48_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#token-type-ids | .md | such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying
the two types of sequence in the model.
The tokenizer returns this mask as the "token_type_ids" entry:
```python
>>> encoded_dict["token_type_ids"]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
The first sequence, the "context" used for the question, has all its tokens represented by a `0`, whereas the second | 16_48_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#token-type-ids | .md | ```
The first sequence, the "context" used for the question, has all its tokens represented by a `0`, whereas the second
sequence, corresponding to the "question", has all its tokens represented by a `1`.
Some models, like [`XLNetModel`] use an additional token represented by a `2`. | 16_48_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#transfer-learning | .md | A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of training a model from scratch, you can leverage knowledge obtained from an existing model as a starting point. This speeds up the learning process and reduces the amount of training data needed. | 16_49_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#transformer | .md | Self-attention based deep learning model architecture. | 16_50_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#unsupervised-learning | .md | A form of model training in which data provided to the model is not labeled. Unsupervised learning techniques leverage statistical information of the data distribution to find patterns useful for the task at hand. | 16_51_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md | https://huggingface.co/docs/transformers/en/glossary/#zero-redundancy-optimizer-zero | .md | Parallelism technique which performs sharding of the tensors somewhat similar to [TensorParallel](#tensor-parallelism-tp),
except that the whole tensor gets reconstructed in time for a forward or backward computation, so the model doesn't need
to be modified. This method also supports various offloading techniques to compensate for limited GPU memory.
Learn more about ZeRO [here](perf_train_gpu_many#zero-data-parallelism). | 16_52_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 17_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 17_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#xla-integration-for-tensorflow-models | .md | [[open-in-colab]]
Accelerated Linear Algebra, dubbed XLA, is a compiler for accelerating the runtime of TensorFlow Models. From the [official documentation](https://www.tensorflow.org/xla):
XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. | 17_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#xla-integration-for-tensorflow-models | .md | Using XLA in TensorFlow is simple – it comes packaged inside the `tensorflow` library, and it can be triggered with the `jit_compile` argument in any graph-creating function such as [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs). When using Keras methods like `fit()` and `predict()`, you can enable XLA simply by passing the `jit_compile` argument to `model.compile()`. However, XLA is not limited to these methods - it can also be used to accelerate any arbitrary `tf.function`. | 17_1_1 |
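For instance, a minimal sketch of the Keras route mentioned above (the toy model, data, and loss here are illustrative) simply passes `jit_compile=True` to `model.compile()`:
```py
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
# Enabling XLA for the built-in training/inference loops is a single flag.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", jit_compile=True)

x = tf.random.normal((16, 10))
y = tf.random.uniform((16,), maxval=5, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)   # runs through XLA-compiled graphs
_ = model.predict(x, verbose=0)
```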
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#xla-integration-for-tensorflow-models | .md | Several TensorFlow methods in 🤗 Transformers have been rewritten to be XLA-compatible, including text generation for models such as [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5) and [OPT](https://huggingface.co/docs/transformers/model_doc/opt), as well as speech processing for models such as [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper). | 17_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#xla-integration-for-tensorflow-models | .md | While the exact amount of speed-up is very much model-dependent, for TensorFlow text generation models inside 🤗 Transformers, we noticed a speed-up of ~100x. This document will explain how you can use XLA for these models to get the maximum amount of performance. We’ll also provide links to additional resources if you’re interested to learn more about the benchmarks and our design philosophy behind the XLA integration. | 17_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#running-tf-functions-with-xla | .md | Let us consider the following model in TensorFlow:
```py
import tensorflow as tf
model = tf.keras.Sequential(
[tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
```
The above model accepts inputs having a dimension of `(10, )`. We can use the model for running a forward pass like so:
```py
# Generate random inputs for the model.
batch_size = 16
input_vector_dim = 10
random_inputs = tf.random.normal((batch_size, input_vector_dim)) | 17_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#running-tf-functions-with-xla | .md | # Run a forward pass.
_ = model(random_inputs)
```
In order to run the forward pass with an XLA-compiled function, we’d need to do:
```py
xla_fn = tf.function(model, jit_compile=True)
_ = xla_fn(random_inputs)
```
The default `call()` function of the `model` is used for compiling the XLA graph. But if there's any other model function you want to compile into XLA, that's also possible with:
```py
my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True)
``` | 17_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#running-a-tf-text-generation-model-with-xla-from--transformers | .md | To enable XLA-accelerated generation within 🤗 Transformers, you need to have a recent version of `transformers` installed. You can install it by running:
```bash
pip install transformers --upgrade
```
And then you can run the following code:
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
# Will error if the minimal version of Transformers is not installed.
from transformers.utils import check_min_version
check_min_version("4.21.0") | 17_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#running-a-tf-text-generation-model-with-xla-from--transformers | .md | check_min_version("4.21.0")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
input_string = ["TensorFlow is"]
# One line to create an XLA generation function
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_string, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2) | 17_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#running-a-tf-text-generation-model-with-xla-from--transformers | .md | decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
# Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the
``` | 17_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#running-a-tf-text-generation-model-with-xla-from--transformers | .md | # Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the
```
As you can notice, enabling XLA on `generate()` is just a single line of code. The rest of the code remains unchanged. However, there are a couple of gotchas in the above code snippet that are specific to XLA. You need to be aware of those to realize the speed-ups that XLA can bring in. We discuss these in the following section. | 17_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | When you are executing an XLA-enabled function (like `xla_generate()` above) for the first time, it will internally try to infer the computation graph, which is time-consuming. This process is known as [“tracing”](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing). | 17_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | This is why you might notice that the first generation call is slow. Successive calls of `xla_generate()` (or any other XLA-enabled function) won’t have to infer the computation graph, provided the inputs to the function have the same shape the computation graph was initially built with. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with modalities that have variable input shapes (e.g., text). | 17_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | To ensure `xla_generate()` always operates with the same input shapes, you can specify the `padding` arguments when calling the tokenizer.
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM | 17_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
input_string = ["TensorFlow is"]
xla_generate = tf.function(model.generate, jit_compile=True)
# Here, we call the tokenizer with padding options.
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") | 17_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
```
This way, you can ensure that `xla_generate()` always receives inputs with the shape it was traced with, leading to speed-ups in generation time. You can verify this with the code below:
```py
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM | 17_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
xla_generate = tf.function(model.generate, jit_compile=True) | 17_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | xla_generate = tf.function(model.generate, jit_compile=True)
for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]:
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
start = time.time_ns()
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
end = time.time_ns()
print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n")
```
On a Tesla T4 GPU, you can expect output like the following:
```bash
Execution time -- 30819.6 ms | 17_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#gotchas-to-be-aware-of | .md | Execution time -- 79.0 ms
Execution time -- 78.9 ms
```
The first call to `xla_generate()` is time-consuming because of tracing, but the successive calls are orders of magnitude faster. Keep in mind that any change in the generation options at any point will trigger re-tracing, leading to slow-downs in generation time.
We didn’t cover all the text generation options 🤗 Transformers provides in this document. We encourage you to read the documentation for advanced use cases. | 17_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#additional-resources | .md | Here, we leave you with some additional resources if you want to delve deeper into XLA in 🤗 Transformers and in general. | 17_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#additional-resources | .md | * [This Colab Notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb) provides an interactive demonstration if you want to fiddle with the XLA-compatible encoder-decoder (like [T5](https://huggingface.co/docs/transformers/model_doc/t5)) and decoder-only (like [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)) text generation models. | 17_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#additional-resources | .md | * [This blog post](https://huggingface.co/blog/tf-xla-generate) provides an overview of the comparison benchmarks for XLA-compatible models along with a friendly introduction to XLA in TensorFlow.
* [This blog post](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html) discusses our design philosophy behind adding XLA support to the TensorFlow models in 🤗 Transformers.
* Recommended posts for learning more about XLA and TensorFlow graphs in general: | 17_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tf_xla.md | https://huggingface.co/docs/transformers/en/tf_xla/#additional-resources | .md | * Recommended posts for learning more about XLA and TensorFlow graphs in general:
* [XLA: Optimizing Compiler for Machine Learning](https://www.tensorflow.org/xla)
* [Introduction to graphs and tf.function](https://www.tensorflow.org/guide/intro_to_graphs)
* [Better performance with tf.function](https://www.tensorflow.org/guide/function) | 17_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 18_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 18_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#llm-inference-optimization | .md | Large language models (LLMs) have pushed text generation applications, such as chat and code completion models, to the next level by producing text that displays a high level of understanding and fluency. But what makes LLMs so powerful - namely their size - also presents challenges for inference. | 18_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#llm-inference-optimization | .md | Basic inference is slow because LLMs have to be called repeatedly to generate the next token. The input sequence increases as generation progresses, which takes longer and longer for the LLM to process. LLMs also have billions of parameters, making it a challenge to store and handle all those weights in memory.
This guide will show you how to use the optimization techniques available in Transformers to accelerate LLM inference.
> [!TIP] | 18_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#llm-inference-optimization | .md | > [!TIP]
> Hugging Face also provides [Text Generation Inference (TGI)](https://hf.co/docs/text-generation-inference), a library dedicated to deploying and serving highly optimized LLMs for inference. It includes deployment-oriented optimization features not included in Transformers, such as continuous batching for increasing throughput and tensor parallelism for multi-GPU inference. | 18_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | During decoding, an LLM computes the key-value (kv) values for each input token, and since it is autoregressive, it recomputes the same kv values at every step because the generated output becomes part of the input. This is not very efficient because you keep recomputing the same kv values. | 18_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | To optimize this, you can use a kv-cache to store the past keys and values instead of recomputing them each time. However, since the kv-cache grows with each generation step and is dynamic, it prevents you from taking advantage of [`torch.compile`](./perf_torch_compile), a powerful optimization tool that fuses PyTorch code into fast and optimized kernels. We have an entire guide dedicated to kv-caches [here](./kv_cache). | 18_2_1 |
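To build intuition for what a kv-cache does (a toy sketch of the idea only, not the Transformers cache API), each decoding step computes keys and values for the new token only and appends them to a growing cache that attention then reads from:
```py
import torch

head_dim = 8
k_cache = torch.empty(0, head_dim)
v_cache = torch.empty(0, head_dim)

def decode_step(new_hidden):
    """Attend over all cached keys/values plus the new token, without recomputing the past."""
    global k_cache, v_cache
    k_new, v_new = new_hidden, new_hidden          # toy "projections" (identity)
    k_cache = torch.cat([k_cache, k_new], dim=0)   # cache grows by one row per step
    v_cache = torch.cat([v_cache, v_new], dim=0)
    q = new_hidden                                  # query for the current token only
    attn = torch.softmax(q @ k_cache.T / head_dim**0.5, dim=-1)
    return attn @ v_cache

for _ in range(4):                                  # four decoding steps
    out = decode_step(torch.randn(1, head_dim))
print(k_cache.shape)  # torch.Size([4, 8]) -- one cached row per generated token
```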
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | The *static kv-cache* solves this issue by pre-allocating the kv-cache size to a maximum value which allows you to combine it with `torch.compile` for up to a 4x speed up. Your speed up may vary depending on the model size (larger models have a smaller speed up) and hardware.
> [!WARNING] | 18_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | > [!WARNING]
> Currently, only [Llama](./model_doc/llama2) and a few other models support static kv-cache and `torch.compile`. Check [this issue](https://github.com/huggingface/transformers/issues/28981) for a live model compatibility list.
There are three flavors of static kv-cache usage, depending on the complexity of your task:
1. Basic usage: simply set a flag in `generation_config` (recommended);
2. Advanced usage: handle a cache object for multi-turn generation or a custom generation loop; | 18_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | 2. Advanced usage: handle a cache object for multi-turn generation or a custom generation loop;
3. Advanced usage: compile the entire `generate` function into a single graph, if having a single graph is relevant for you.
Select the correct tab below for further instructions on each of these flavors.
> [!TIP] | 18_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | Select the correct tab below for further instructions on each of these flavors.
> [!TIP]
> Regardless of the strategy used with `torch.compile`, you can avoid shape-related recompilations if you left-pad your LLM inputs to a limited set of values. The [`pad_to_multiple_of` tokenizer flag](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizer.__call__.pad_to_multiple_of) is your friend!
<hfoptions id="static-kv">
<hfoption id="basic usage: generation_config"> | 18_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | <hfoptions id="static-kv">
<hfoption id="basic usage: generation_config">
For this example, let's use the [Gemma](https://hf.co/google/gemma-2b) model. All we need to do is to:
1. Access the model's `generation_config` attribute and set the `cache_implementation` to "static";
2. Call `torch.compile` on the model to compile the forward pass with the static kv-cache.
And that's it!
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os | 18_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | And that's it!
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To prevent long warnings :) | 18_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype="auto", device_map="auto")
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device.type) | 18_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | outputs = model.generate(**input_ids)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference']
```
Under the hood, `generate` will attempt to reuse the same cache object, removing the need for re-compilation at each call. Avoiding re-compilation is critical to get the most out of `torch.compile`, and you should be aware of the following: | 18_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | 1. If the batch size changes or the maximum output length increases between calls, the cache will have to be reinitialized, triggering a new compilation;
2. The first couple of calls of the compiled function are slower, as the function is being compiled.
> [!WARNING]
> For a more advanced usage of the static cache, such as multi-turn conversations, we recommend instantiating and manipulating the cache object outside [`~GenerationMixin.generate`]. See the advanced usage tab.
</hfoption> | 18_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | </hfoption>
<hfoption id="advanced usage: control Static Cache">
A [`StaticCache`] object can be passed to the model's [`~GenerationMixin.generate`] under the `past_key_values` argument. The object will retain the cache contents, so you can pass it to a new [`~GenerationMixin.generate`] call to continue generation, like you would do with a dynamic cache.
```py
from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache
import torch
import os | 18_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | ```py
from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To prevent long warnings :) | 18_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype="auto", device_map="auto")
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device.type)
prompt_length = input_ids.input_ids.shape[1]
model.generation_config.max_new_tokens = 16 | 18_2_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | past_key_values = StaticCache(
config=model.config,
batch_size=1,
# If you plan to reuse the cache, make sure the cache length is large enough for all cases
max_cache_len=prompt_length+(model.generation_config.max_new_tokens*2),
device=model.device,
dtype=model.dtype
)
outputs = model.generate(**input_ids, past_key_values=past_key_values)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) | 18_2_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference frames. 2'] | 18_2_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | # pass in the generated text and the same cache object to continue generation from where it left off. Optionally, in a
# multi-turn conversation, append the new user input to the generated text.
new_input_ids = outputs
outputs = model.generate(new_input_ids, past_key_values=past_key_values)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) | 18_2_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference frames. 2. The speed of light is constant in all inertial reference frames. 3.']
```
> [!TIP]
> If you want to reuse the same [`StaticCache`] object on a new prompt, be sure to reset its contents with the `.reset()` method between calls | 18_2_17 |
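For illustration, continuing the example above, reusing the pre-allocated cache on an unrelated prompt might look like this (the new prompt string is just an example):
```py
past_key_values.reset()  # clear the cached keys/values before starting a new prompt

new_input = tokenizer("The theory of general relativity states ", return_tensors="pt").to(model.device.type)
outputs = model.generate(**new_input, past_key_values=past_key_values)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```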
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | If you want to go further down a level, the [`StaticCache`] object can also be passed to the model's forward pass under the same `past_key_values` argument. Using this strategy, you can write your own function to decode the next token given the current token and position and cache position of previously generated tokens.
```py
from transformers import LlamaTokenizer, LlamaForCausalLM, StaticCache, logging
from transformers.testing_utils import CaptureLogger
import torch | 18_2_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | from transformers.testing_utils import CaptureLogger
import torch
from accelerate.test_utils.testing import get_backend | 18_2_19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | prompts = [
"Simply put, the theory of relativity states that ",
"My favorite all time favorite condiment is ketchup.",
]
NUM_TOKENS_TO_GENERATE = 40
torch_device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.) | 18_2_20 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="</s>", padding_side="right")
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="sequential")
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device) | 18_2_21 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
logits = model(
cur_token,
position_ids=input_pos,
cache_position=cache_position,
past_key_values=past_key_values,
return_dict=False,
use_cache=True
)[0]
new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
return new_token
```
There are a few important things you must do to enable static kv-cache and `torch.compile` with the `StaticCache` method: | 18_2_22 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | ```
There are a few important things you must do to enable static kv-cache and `torch.compile` with the `StaticCache` method:
1. Initialize the [`StaticCache`] instance before using the model for inference. There you can configure parameters like the maximum batch size and sequence length.
2. Call `torch.compile` on the model to compile the forward pass with the static kv-cache. | 18_2_23 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | 2. Call `torch.compile` on the model to compile the forward pass with the static kv-cache.
3. Use `SDPBackend.MATH` in the [torch.nn.attention.sdpa_kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html) context manager to enable the native PyTorch C++ implementation of scaled dot product attention to speed up inference even more.
```py
from torch.nn.attention import SDPBackend, sdpa_kernel | 18_2_24 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | batch_size, seq_length = inputs["input_ids"].shape
with torch.no_grad():
past_key_values = StaticCache(
config=model.config, batch_size=2, max_cache_len=4096, device=torch_device, dtype=model.dtype
)
cache_position = torch.arange(seq_length, device=torch_device)
generated_ids = torch.zeros(
batch_size, seq_length + NUM_TOKENS_TO_GENERATE + 1, dtype=torch.int, device=torch_device
)
generated_ids[:, cache_position] = inputs["input_ids"].to(torch_device).to(torch.int) | 18_2_25 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | logits = model(
**inputs, cache_position=cache_position, past_key_values=past_key_values,return_dict=False, use_cache=True
)[0]
next_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
generated_ids[:, seq_length] = next_token[:, 0] | 18_2_26 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True)
cache_position = torch.tensor([seq_length + 1], device=torch_device)
for _ in range(1, NUM_TOKENS_TO_GENERATE):
with sdpa_kernel(SDPBackend.MATH):
next_token = decode_one_tokens(model, next_token.clone(), None, cache_position, past_key_values)
generated_ids[:, cache_position] = next_token.int()
cache_position += 1 | 18_2_27 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
text
['Simply put, the theory of relativity states that 1) the speed of light is constant, 2) the speed of light is the same for all observers, and 3) the laws of physics are the same for all observers.',
'My favorite all time favorite condiment is ketchup. I love it on everything. I love it on my eggs, my fries, my chicken, my burgers, my hot dogs, my sandwiches, my salads, my p']
```
</hfoption> | 18_2_28 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | ```
</hfoption>
<hfoption id="advanced usage: end-to-end generate compilation">
Compiling the entire `generate` function, in terms of code, is even simpler than in the basic usage: call `torch.compile` on `generate` to compile the entire function. No need to specify the use of the static cache: although it is compatible, dynamic cache (default) was faster in our benchmarks.
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os | 18_2_29 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | ```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To prevent long warnings :) | 18_2_30 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype="auto", device_map="auto")
model.generate = torch.compile(model.generate, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device.type) | 18_2_31 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | outputs = model.generate(**input_ids)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference']
``` | 18_2_32 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | ['The theory of special relativity states 1. The speed of light is constant in all inertial reference']
```
As a result, we compile not only the model forward pass, but also all input preparation, logit processor operations, and so on. The result should be a slightly faster `generate` call, compared to the basic usage example, and the compiled graph may be better suited to more exotic hardware devices or use cases. However, there are severe drawbacks to using this approach:
1. Compilation is much slower; | 18_2_33 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#static-kv-cache-and-torchcompile | .md | 1. Compilation is much slower;
2. All parameterization of `generate` must be done through `generation_config`;
3. Many warnings and exceptions are suppressed -- we suggest testing with its uncompiled form first;
4. Although we are working on it, it is heavily feature restricted (for instance, at the time of writing, generation does not stop if an EOS token is selected).
</hfoption>
</hfoptions> | 18_2_34 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | > [!TIP]
> For a more in-depth explanation, take a look at the [Assisted Generation: a new direction toward low-latency text generation](https://hf.co/blog/assisted-generation) blog post! | 18_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | Another issue with autoregression is that for each input token you need to load the model weights each time during the forward pass. This is slow and cumbersome for LLMs which have billions of parameters. Speculative decoding alleviates this slowdown by using a second smaller and faster assistant model to generate candidate tokens that are verified by the larger LLM in a single forward pass. If the verified tokens are correct, the LLM essentially gets them for "free" without having to generate them itself. | 18_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | pass. If the verified tokens are correct, the LLM essentially gets them for "free" without having to generate them itself. There is no degradation in accuracy because the verification forward pass ensures the same outputs are generated as if the LLM had generated them on its own. | 18_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | To get the largest speed up, the assistant model should be a lot smaller than the LLM so that it can generate tokens quickly. The assistant and LLM model must also share the same tokenizer to avoid re-encoding and decoding tokens.
> [!WARNING]
> Speculative decoding is only supported for the greedy search and sampling decoding strategies, and it also doesn't support batched inputs.
Enable speculative decoding by loading an assistant model and passing it to the [`~GenerationMixin.generate`] method. | 18_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | Enable speculative decoding by loading an assistant model and passing it to the [`~GenerationMixin.generate`] method.
<hfoptions id="spec-decoding">
<hfoption id="greedy search">
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from accelerate.test_utils.testing import get_backend | 18_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Einstein's theory of relativity states", return_tensors="pt").to(device) | 18_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype="auto").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Einstein's theory of relativity states that the speed of light is constant. "]
```
</hfoption>
<hfoption id="sampling"> | 18_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | ```
</hfoption>
<hfoption id="sampling">
For speculative sampling decoding, add the `do_sample` and `temperature` parameters to the [`~GenerationMixin.generate`] method in addition to the assistant model.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from accelerate.test_utils.testing import get_backend | 18_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Einstein's theory of relativity states", return_tensors="pt").to(device) | 18_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#speculative-decoding | .md | model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype="auto").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.7)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
["Einstein's theory of relativity states that motion in the universe is not a straight line.\n"]
```
</hfoption>
</hfoptions> | 18_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#prompt-lookup-decoding | .md | Prompt lookup decoding is a variant of speculative decoding that is also compatible with greedy search and sampling. Prompt lookup works especially well for input-grounded tasks - such as summarization - where there are often overlapping words between the prompt and output. These overlapping n-grams are used as the LLM candidate tokens. | 18_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#prompt-lookup-decoding | .md | To enable prompt lookup decoding, specify the number of tokens that should be overlapping in the `prompt_lookup_num_tokens` parameter. Then you can pass this parameter to the [`~GenerationMixin.generate`] method.
<hfoptions id="pld">
<hfoption id="greedy decoding">
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from accelerate.test_utils.testing import get_backend | 18_4_1 |
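# A hedged sketch of how the truncated example above might continue -- the prompt text
# and the `prompt_lookup_num_tokens` value here are illustrative, not from the original page.
device, _, _ = get_backend()  # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype="auto").to(device)
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device)

# Overlapping n-grams from the prompt are used as the candidate tokens.
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```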