Deploying these models in real-world tasks remains challenging, however:
- To exhibit near-human text understanding and generation capabilities, LLMs currently need to be composed of billions of parameters (see [Kaplan et al.](https://arxiv.org/abs/2001.08361), [Wei et al.](https://arxiv.org/abs/2206.07682)). This consequently amplifies the memory demands for inference.
- In many real-world tasks, LLMs need to be given extensive contextual information. This necessitates the model's capability to manage very long input sequences during inference.

The crux of these challenges lies in augmenting the computational and memory capabilities of LLMs, especially when handling expansive input sequences.

In this guide, we will go over the effective techniques for efficient LLM deployment:
1. **Lower Precision:** Research has shown that operating at reduced numerical precision, namely [8-bit and 4-bit](./main_classes/quantization.md), can achieve computational advantages without a considerable decline in model performance.
2. **Flash Attention:** Flash Attention is a variation of the attention algorithm that not only provides a more memory-efficient approach but also realizes increased efficiency due to optimized GPU memory utilization.
3. **Architectural Innovations:** Considering that LLMs are always deployed in the same way during inference, namely autoregressive text generation with a long input context, specialized model architectures have been proposed that allow for more efficient inference. The most important advancements in model architecture here are [Alibi](https://arxiv.org/abs/2108.12409), [Rotary embeddings](https://arxiv.org/abs/2104.09864), [Multi-Query Attention (MQA)](https://arxiv.org/abs/1911.02150) and [Grouped-Query-Attention (GQA)](https://arxiv.org/abs/2305.13245).
Throughout this guide, we will offer an analysis of auto-regressive generation from a tensor's perspective. We delve into the pros and cons of adopting lower precision, provide a comprehensive exploration of the latest attention algorithms, and discuss improved LLM architectures. While doing so, we run practical examples showcasing each of the feature improvements.
## 1. Lower Precision

Memory requirements of LLMs can be best understood by seeing the LLM as a set of weight matrices and vectors and the text inputs as a sequence of vectors. In the following, the definition *weights* will be used to signify all model weight matrices and vectors.
At the time of writing this guide, LLMs consist of at least a couple billion parameters. Each parameter is a decimal number, e.g. `4.5689`, which is usually stored in either [float32](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), [bfloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format), or [float16](https://en.wikipedia.org/wiki/Half-precision_floating-point_format) format. This allows us to easily compute the memory requirement to load the LLM into memory:
> *Loading the weights of a model having X billion parameters requires roughly 4 * X GB of VRAM in float32 precision*

Nowadays, however, models are rarely trained in full float32 precision, but usually in bfloat16 precision or, less frequently, in float16 precision. Therefore the rule of thumb becomes:

> *Loading the weights of a model having X billion parameters requires roughly 2 * X GB of VRAM in bfloat16/float16 precision*
For shorter text inputs (less than 1024 tokens), the memory requirement for inference is very much dominated by the memory requirement to load the weights. Therefore, for now, let's assume that the memory requirement for inference is equal to the memory requirement to load the model into the GPU VRAM.

To give some examples of how much VRAM it roughly takes to load a model in bfloat16:

- **GPT3** requires 2 \* 175 GB = **350 GB** VRAM
- [**Bloom**](https://huggingface.co/bigscience/bloom) requires 2 \* 176 GB = **352 GB** VRAM
- [**Llama-2-70b**](https://huggingface.co/meta-llama/Llama-2-70b-hf) requires 2 \* 70 GB = **140 GB** VRAM
- [**Falcon-40b**](https://huggingface.co/tiiuae/falcon-40b) requires 2 \* 40 GB = **80 GB** VRAM
- [**MPT-30b**](https://huggingface.co/mosaicml/mpt-30b) requires 2 \* 30 GB = **60 GB** VRAM
- [**bigcode/starcoder**](https://huggingface.co/bigcode/starcoder) requires 2 \* 15.5 GB = **31 GB** VRAM
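
To make the rule of thumb concrete, here is a minimal helper (the function name `required_vram_gb` is purely illustrative) that computes the approximate VRAM needed to hold a model's weights for a given parameter count and number of bytes per parameter:

```python
def required_vram_gb(num_params_in_billions: float, bytes_per_param: float = 2) -> float:
    # bytes_per_param: 4 for float32, 2 for bfloat16/float16, 1 for int8, 0.5 for int4
    return num_params_in_billions * bytes_per_param

# Example: Llama-2-70b in bfloat16 -> roughly 140 GB
print(required_vram_gb(70, bytes_per_param=2))
```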
As of writing this document, the largest GPU chips on the market are the A100 and H100, each offering 80GB of VRAM. Most of the models listed before require more than 80GB just to be loaded and therefore necessarily require [tensor parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#tensor-parallelism) and/or [pipeline parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism).
🤗 Transformers does not support tensor parallelism out of the box as it requires the model architecture to be written in a specific way. If you're interested in writing models in a tensor-parallelism-friendly way, feel free to have a look at [the text-generation-inference library](https://github.com/huggingface/text-generation-inference/tree/main/server/text_generation_server/models/custom_modeling).
Naive pipeline parallelism is supported out of the box. For this, simply load the model with `device_map="auto"`, which will automatically place the different layers on the available GPUs as explained [here](https://huggingface.co/docs/accelerate/v0.22.0/en/concept_guides/big_model_inference).
Note, however, that while very effective, this naive pipeline parallelism does not tackle the issues of GPU idling. For this, more advanced pipeline parallelism is required as explained [here](https://huggingface.co/docs/transformers/en/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism).

If you have access to an 8 x 80GB A100 node, you could load BLOOM as follows

```bash
!pip install transformers accelerate bitsandbytes optimum
```

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom", device_map="auto", pad_token_id=0)
```

By using `device_map="auto"` the attention layers would be equally distributed over all available GPUs.
In this guide, we will use [bigcode/octocoder](https://huggingface.co/bigcode/octocoder) as it can be run on a single 40 GB A100 GPU. Note that all memory and speed optimizations that we will apply going forward are equally applicable to models that require model or tensor parallelism.
Since the model is loaded in bfloat16 precision, using our rule of thumb above, we would expect the memory requirement to run inference with `bigcode/octocoder` to be around 31 GB VRAM. Let's give it a try.

We first load the model and tokenizer and then pass both to Transformers' [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) object.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16, device_map="auto", pad_token_id=0)
tokenizer = AutoTokenizer.from_pretrained("bigcode/octocoder")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```

```python
prompt = "Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer:"

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```
**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single
```
Nice, we can now directly use the result to convert bytes into Gigabytes.

```python
def bytes_to_giga_bytes(bytes):
    return bytes / 1024 / 1024 / 1024
```
Let's call [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html) to measure the peak GPU memory allocation.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```
**Output**:
```bash
29.0260648727417
```
Close enough to our back-of-the-envelope computation! We can see the number is not exactly correct as going from bytes to kilobytes requires a multiplication of 1024 instead of 1000. Therefore the back-of-the-envelope formula can also be understood as an "at most X GB" computation.
Note that if we had tried to run the model in full float32 precision, a whopping 64 GB of VRAM would have been required.

> Almost all models are trained in bfloat16 nowadays; there is no reason to run the model in full float32 precision if [your GPU supports bfloat16](https://discuss.pytorch.org/t/bfloat16-native-support/117155/5). Float32 won't give better inference results than the precision that was used to train the model.
If you are unsure in which format the model weights are stored on the Hub, you can always look into the checkpoint's config under `"torch_dtype"`, *e.g.* [here](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/6fdf2e60f86ff2481f2241aaee459f85b5b0bbb9/config.json#L21). It is recommended to set the model to the same precision type as written in the config when loading with `from_pretrained(..., torch_dtype=...)`, except when the original type is float32, in which case one can use either `float16` or `bfloat16` for inference.
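
As a small sketch of the recommendation above (the checkpoint name is only an example; any Hub model id works), one could read the stored dtype programmatically and reuse it when loading:

```python
from transformers import AutoConfig, AutoModelForCausalLM
import torch

checkpoint = "meta-llama/Llama-2-7b-hf"  # example checkpoint

config = AutoConfig.from_pretrained(checkpoint)
stored_dtype = config.torch_dtype  # dtype the checkpoint declares, e.g. torch.float16

# Keep the stored dtype, but fall back to bfloat16 if the checkpoint was saved in
# float32 (or does not declare a dtype at all).
dtype = stored_dtype if stored_dtype not in (None, torch.float32) else torch.bfloat16

model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=dtype, device_map="auto")
```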
Let's define a `flush(...)` function to free all allocated memory so that we can accurately measure the peak allocated GPU memory.

```python
del pipe
del model

import gc
import torch

def flush():
    gc.collect()
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
```

Let's call it now for the next experiment.

```python
flush()
```

From the Accelerate library, you can also use a device-agnostic utility method called [release_memory](https://github.com/huggingface/accelerate/blob/29be4788629b772a3b722076e433b5b3b5c85da3/src/accelerate/utils/memory.py#L63), which takes various hardware backends like XPU, MLU, NPU, MPS, and more into account.

```python
from accelerate.utils import release_memory
# ...

release_memory(model)
```

Now what if your GPU does not have 32 GB of VRAM? It has been found that model weights can be quantized to 8-bit or 4-bit without a significant loss in performance (see [Dettmers et al.](https://arxiv.org/abs/2208.07339)).
Models can be quantized to even 3 or 2 bits with an acceptable loss in performance as shown in the recent [GPTQ paper](https://arxiv.org/abs/2210.17323) 🤯.
Without going into too many details, quantization schemes aim at reducing the precision of weights while trying to keep the model's inference results as accurate as possible (*a.k.a.* as close as possible to bfloat16).
Note that quantization works especially well for text generation since all we care about is choosing the *set of most likely next tokens* and don't really care about the exact values of the next token *logit* distribution.
All that matters is that the next token *logit* distribution stays roughly the same so that an `argmax` or `topk` operation gives the same results.

There are various quantization techniques, which we won't discuss in detail here, but in general, all quantization techniques work as follows:

- 1. Quantize all weights to the target precision
- 2. Load the quantized weights, and pass the input sequence of vectors in bfloat16 precision
- 3. Dynamically dequantize weights to bfloat16 to perform the computation with their input vectors in bfloat16 precision
In a nutshell, this means that *inputs-weight matrix* multiplications, with \\( X \\) being the *inputs*, \\( W \\) being a weight matrix and \\( Y \\) being the output:
$$ Y = X * W $$
are changed to
$$ Y = X * \text{dequantize}(W) $$ | 43_2_29 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial_optimization.md | https://huggingface.co/docs/transformers/en/llm_tutorial_optimization/#1-lower-precision | .md | $$ Y = X * W $$
are changed to
$$ Y = X * \text{dequantize}(W) $$
for every matrix multiplication. Dequantization and re-quantization is performed sequentially for all weight matrices as the inputs run through the network graph.
Therefore, inference time is often **not** reduced when using quantized weights, but rather increases.
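
To illustrate the idea, here is a toy sketch of symmetric absmax int8 quantization (not the exact scheme used by `bitsandbytes`; the function names are illustrative):

```python
import torch

def quantize_int8(weight: torch.Tensor):
    # Symmetric absmax quantization: map the weight range onto int8 values [-127, 127]
    scale = weight.abs().max() / 127.0
    w_int8 = torch.round(weight / scale).to(torch.int8)
    return w_int8, scale

def dequantize(w_int8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return w_int8.to(torch.bfloat16) * scale

W = torch.randn(4096, 4096, dtype=torch.bfloat16)  # a toy weight matrix
X = torch.randn(1, 4096, dtype=torch.bfloat16)     # a toy input vector

W_int8, scale = quantize_int8(W)      # stored with 1 byte per value instead of 2
Y = X @ dequantize(W_int8, scale)     # Y = X * dequantize(W)
```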
Enough theory, let's give it a try! To quantize the weights with Transformers, you need to make sure that | 43_2_30 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial_optimization.md | https://huggingface.co/docs/transformers/en/llm_tutorial_optimization/#1-lower-precision | .md | Enough theory, let's give it a try! To quantize the weights with Transformers, you need to make sure that
the [`bitsandbytes`](https://github.com/bitsandbytes-foundation/bitsandbytes) library is installed.
```bash
!pip install bitsandbytes
```

We can then load models in 8-bit quantization by simply adding a `load_in_8bit=True` flag to `from_pretrained`.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_8bit=True, pad_token_id=0)
```
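
In more recent versions of Transformers, the same quantization can also be requested through a `BitsAndBytesConfig` object, which is the more explicit form (a sketch, assuming a recent `transformers`/`bitsandbytes` installation):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/octocoder", quantization_config=quantization_config, pad_token_id=0
)
```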
Now, let's run our example again and measure the memory usage.

```python
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```
**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single
```
Nice, we're getting the same result as before, so no loss in accuracy! Let's look at how much memory was used this time.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```
**Output**:
```
15.219234466552734
```
Significantly less! We're down to just a bit over 15 GB and could therefore run this model on consumer GPUs like the 4090.
We're seeing a very nice gain in memory efficiency and more or less no degradation to the model's output. However, we can also notice a slight slow-down during inference.

We delete the models and flush the memory again.

```python
del model
del pipe
```

```python
flush()
```
Let's see what peak GPU memory consumption 4-bit quantization gives. Quantizing the model to 4-bit can be done with the same API as before - this time by passing `load_in_4bit=True` instead of `load_in_8bit=True`.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_4bit=True, low_cpu_mem_usage=True, pad_token_id=0)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```
**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```\ndef bytes_to_gigabytes(bytes):\n return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single argument
```
We're almost seeing the same output text as before - only the `python` hint is missing right before the code snippet. Let's see how much memory was required.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```
**Output**:
```
9.543574333190918
```
Just 9.5GB! That's really not a lot for a >15 billion parameter model.

While we see very little degradation in accuracy for our model here, 4-bit quantization can in practice often lead to different results compared to 8-bit quantization or full `bfloat16` inference. It is up to the user to try it out.

Also note that inference here was again a bit slower compared to 8-bit quantization which is due to the more aggressive quantization method used for 4-bit quantization leading to \\( \text{quantize} \\) and \\( \text{dequantize} \\) taking longer during inference.

```python
del model
del pipe
```

```python
flush()
```

Overall, we saw that running OctoCoder in 8-bit precision reduced the required GPU VRAM from 32 GB to only 15 GB, and running the model in 4-bit precision further reduces the required GPU VRAM to just a bit over 9 GB.

4-bit quantization allows the model to be run on GPUs such as RTX3090, V100, and T4 which are quite accessible for most people.
For more information on quantization and to see how one can quantize models to require even less GPU VRAM than 4-bit, we recommend looking into the [`AutoGPTQ`](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#autogptq-integration) implementation.

> As a conclusion, it is important to remember that model quantization trades improved memory efficiency against accuracy and in some cases inference time.
If GPU memory is not a constraint for your use case, there is often no need to look into quantization. However, many GPUs simply can't run LLMs without quantization methods and in this case, 4-bit and 8-bit quantization schemes are extremely useful tools.

For more in-detail usage information, we strongly recommend taking a look at the [Transformers Quantization Docs](https://huggingface.co/docs/transformers/main_classes/quantization#general-usage).
Next, let's look into how we can improve computational and memory efficiency by using better algorithms and an improved model architecture.
## 2. Flash Attention

Today's top-performing LLMs share more or less the same fundamental architecture that consists of feed-forward layers, activation layers, layer normalization layers, and most crucially, self-attention layers.
Self-attention layers are central to Large Language Models (LLMs) in that they enable the model to understand the contextual relationships between input tokens.
However, self-attention layers grow *quadratically* in both compute and peak GPU memory with the number of input tokens (also called *sequence length*), which we denote in the following by \\( N \\) .
While this is not really noticeable for shorter input sequences (of up to 1000 input tokens), it becomes a serious problem for longer input sequences (at around 16000 input tokens).
Let's take a closer look. The formula to compute the output \\( \mathbf{O} \\) of a self-attention layer for an input \\( \mathbf{X} \\) of length \\( N \\) is:

$$ \textbf{O} = \text{Attn}(\mathbf{X}) = \mathbf{V} \times \text{Softmax}(\mathbf{QK}^T) \text{ with } \mathbf{Q} = \mathbf{W}_q \mathbf{X}, \mathbf{V} = \mathbf{W}_v \mathbf{X}, \mathbf{K} = \mathbf{W}_k \mathbf{X} $$
\\( \mathbf{X} = (\mathbf{x}_1, ... \mathbf{x}_{N}) \\) is thereby the input sequence to the attention layer. The projections \\( \mathbf{Q} \\) and \\( \mathbf{K} \\) will each consist of \\( N \\) vectors resulting in the \\( \mathbf{QK}^T \\) being of size \\( N^2 \\) .
LLMs usually have multiple attention heads, thus doing multiple self-attention computations in parallel.
Assuming the LLM has 40 attention heads and runs in bfloat16 precision, we can calculate the memory requirement to store the \\( \mathbf{QK^T} \\) matrices to be \\( 40 * 2 * N^2 \\) bytes. For \\( N=1000 \\) only around 50 MB of VRAM are needed, however, for \\( N=16000 \\) we would need 19 GB of VRAM, and for \\( N=100,000 \\) we would need almost 1TB just to store the \\( \mathbf{QK}^T \\) matrices.
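
As a quick sanity check of these numbers, here is a back-of-the-envelope computation (the helper name is illustrative, not part of any library):

```python
def qk_matrix_memory_gb(sequence_length: int, num_heads: int = 40, bytes_per_value: int = 2) -> float:
    # Memory needed to store one N x N attention score matrix per head in bfloat16
    return num_heads * bytes_per_value * sequence_length**2 / 1024**3

for n in (1_000, 16_000, 100_000):
    print(n, round(qk_matrix_memory_gb(n), 2), "GB")
# roughly 0.07 GB, 19 GB and 745 GB respectively
```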
Long story short, the default self-attention algorithm quickly becomes prohibitively memory-expensive for large input contexts.

As LLMs improve in text comprehension and generation, they are applied to increasingly complex tasks. While models once handled the translation or summarization of a few sentences, they now manage entire pages, demanding the capability to process extensive input lengths.
How can we get rid of the exorbitant memory requirements for large input lengths? We need a new way to compute the self-attention mechanism that gets rid of the \\( QK^T \\) matrix. [Tri Dao et al.](https://arxiv.org/abs/2205.14135) developed exactly such a new algorithm and called it **Flash Attention**.
In a nutshell, Flash Attention breaks the \\( \mathbf{V} \times \text{Softmax}(\mathbf{QK}^T) \\) computation apart and instead computes smaller chunks of the output by iterating over multiple softmax computation steps:

$$ \textbf{O}_i \leftarrow s^a_{ij} * \textbf{O}_i + s^b_{ij} * \mathbf{V}_{j} \times \text{Softmax}(\mathbf{QK}^T_{i,j}) \text{ for multiple } i, j \text{ iterations} $$
with \\( s^a_{ij} \\) and \\( s^b_{ij} \\) being some softmax normalization statistics that need to be recomputed for every \\( i \\) and \\( j \\) .

Please note that the whole Flash Attention is a bit more complex and is greatly simplified here, as going into too much depth is out of scope for this guide. The reader is invited to take a look at the well-written [Flash Attention paper](https://arxiv.org/abs/2205.14135) for more details.

The main takeaway here is:
> By keeping track of softmax normalization statistics and by using some smart mathematics, Flash Attention gives **numerically identical** outputs compared to the default self-attention layer at a memory cost that only increases linearly with \\( N \\) .
Looking at the formula, one would intuitively say that Flash Attention must be much slower compared to the default self-attention formula as more computation needs to be done. Indeed, Flash Attention requires more FLOPs compared to normal attention as the softmax normalization statistics have to constantly be recomputed (see the [paper](https://arxiv.org/abs/2205.14135) for more details if interested).
> However, Flash Attention is much faster in inference compared to default attention which comes from its ability to significantly reduce the demands on the slower, high-bandwidth memory of the GPU (VRAM), focusing instead on the faster on-chip memory (SRAM).

Essentially, Flash Attention makes sure that all intermediate write and read operations can be done using the fast *on-chip* SRAM memory instead of having to access the slower VRAM memory to compute the output vector \\( \mathbf{O} \\) .
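
In PyTorch, these fused kernels are exposed through [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html). The following minimal sketch (tensor shapes are made up for illustration and a CUDA GPU is assumed) computes attention without materializing the full \\( N \times N \\) score matrix in VRAM when a fused kernel is available:

```python
import torch
import torch.nn.functional as F

# Toy tensors of shape (batch, num_heads, seq_len, head_dim)
q = torch.randn(1, 40, 4096, 128, dtype=torch.bfloat16, device="cuda")
k = torch.randn(1, 40, 4096, 128, dtype=torch.bfloat16, device="cuda")
v = torch.randn(1, 40, 4096, 128, dtype=torch.bfloat16, device="cuda")

# PyTorch dispatches to a fused (Flash / memory-efficient) kernel when one is available
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```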
In practice, there is currently absolutely no reason to **not** use Flash Attention if available. The algorithm gives mathematically the same outputs, and is both faster and more memory-efficient.

Let's look at a practical example.

Our OctoCoder model now gets a significantly longer input prompt which includes a so-called *system prompt*. System prompts are used to steer the LLM into a better assistant that is tailored to the users' task.
In the following, we use a system prompt that will make OctoCoder a better coding assistant.

```python
system_prompt = """Below are a series of dialogues between various people and an AI technical assistant.
The assistant tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble but knowledgeable.
The assistant is happy to help with code questions and will do their best to understand exactly what is needed.
It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer.
That said, the assistant is practical, really does its best, and doesn't let caution get too much in the way of being useful.

The Starcoder models are a series of 15.5B parameter models trained on 80+ programming languages from The Stack (v1.2) (excluding opt-out requests).
The model uses Multi Query Attention, was trained using the Fill-in-the-Middle objective, and with 8,192 tokens context window for a trillion tokens of heavily deduplicated data.

-----

Question: Write a function that takes two lists and returns a list that has alternating elements from each input list.

Answer: Sure. Here is a function that does that.

def alternating(list1, list2):
    results = []
    for i in range(len(list1)):
        results.append(list1[i])
        results.append(list2[i])
    return results

Question: Can you write some test cases for this function?

Answer: Sure, here are some tests.

assert alternating([10, 20, 30], [1, 2, 3]) == [10, 1, 20, 2, 30, 3]
assert alternating([True, False], [4, 5]) == [True, 4, False, 5]
assert alternating([], []) == []

Question: Modify the function so that it returns all input elements when the lists have uneven length. The elements from the longer list should be at the end.

Answer: Here is the modified function.

def alternating(list1, list2):
    results = []
    for i in range(min(len(list1), len(list2))):
        results.append(list1[i])
        results.append(list2[i])
    if len(list1) > len(list2):
        results.extend(list1[i+1:])
    else:
        results.extend(list2[i+1:])
    return results

-----
"""
```

For demonstration purposes, we duplicate the system prompt by ten so that the input length is long enough to observe Flash Attention's memory savings.
We append the original text prompt `"Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here"`

```python
long_prompt = 10 * system_prompt + prompt
```

We instantiate our model again in bfloat16 precision.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigcode/octocoder")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```

Let's now run the model just like before *without Flash Attention* and measure the peak GPU memory requirement and inference time.

```python
import time

start_time = time.time()
result = pipe(long_prompt, max_new_tokens=60)[0]["generated_text"][len(long_prompt):]

print(f"Generated in {time.time() - start_time} seconds.")
result
```
**Output**:
```
Generated in 10.96854019165039 seconds.
Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef
```
We're getting the same output as before, however this time, the model repeats the answer multiple times until the 60-token cutoff is reached. This is not surprising as we've repeated the system prompt ten times for demonstration purposes and thus cued the model to repeat itself.

**Note** that the system prompt should not be repeated ten times in real-world applications - one time is enough!

Let's measure the peak GPU memory requirement.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```
**Output**:
```bash
37.668193340301514
```

As we can see the peak GPU memory requirement is now significantly higher than in the beginning, which is largely due to the longer input sequence. Also, generation now takes about 11 seconds.

We call `flush()` to free GPU memory for our next experiment.

```python
flush()
```
For comparison, let's run the same function, but enable Flash Attention instead.
To do so, we convert the model to [BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview) and by doing so enable PyTorch's [SDPA self-attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention), which in turn is able to use Flash Attention.

```python
model.to_bettertransformer()
```

Now we run the exact same code snippet as before and under the hood Transformers will make use of Flash Attention.

```py
start_time = time.time()
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    result = pipe(long_prompt, max_new_tokens=60)[0]["generated_text"][len(long_prompt):]

print(f"Generated in {time.time() - start_time} seconds.")
result
```
**Output**:
```
Generated in 3.0211617946624756 seconds.
Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef
```

We're getting the exact same result as before, but can observe a very significant speed-up thanks to Flash Attention.

Let's measure the memory consumption one last time.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```
**Output**:
```
32.617331981658936
```

We end up at roughly 32.6 GB of peak GPU memory, not too far from the 29 GB we measured at the very beginning.
In other words, passing a very long input sequence with Flash Attention costs only a few additional GB of GPU memory compared to passing the short input sequence from the beginning.

```py
flush()
```

For more information on how to use Flash Attention, please have a look at [this doc page](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#flashattention-2).
## 3. Architectural Innovations

So far we have looked into improving computational and memory efficiency by:

- Casting the weights to a lower precision format
- Replacing the self-attention algorithm with a more memory- and compute-efficient version

Let's now look into how we can change the architecture of an LLM so that it is most effective and efficient for tasks that require long text inputs, *e.g.*:

- Retrieval-augmented question answering,
- Summarization,
- Chat

Note that *chat* not only requires the LLM to handle long text inputs, but it also necessitates that the LLM is able to efficiently handle the back-and-forth dialogue between user and assistant (such as ChatGPT).

Once trained, the fundamental LLM architecture is difficult to change, so it is important to make considerations about the LLM's tasks beforehand and accordingly optimize the model's architecture.
There are two important components of the model architecture that quickly become memory and/or performance bottlenecks for large input sequences:

- The positional embeddings
- The key-value cache

Let's go over each component in more detail.
### 3.1 Improving positional embeddings of LLMs

Self-attention puts each token in relation to each other token.
As an example, the \\( \text{Softmax}(\mathbf{QK}^T) \\) matrix of the text input sequence *"Hello", "I", "love", "you"* could look as follows:

![](/blog/assets/163_optimize_llm/self_attn_tokens.png)

Each word token is given a probability mass at which it attends to all other word tokens and, therefore, is put into relation with all other word tokens. E.g. the word *"love"* attends to the word *"Hello"* with 5%, to *"I"* with 30%, and to itself with 65%.

An LLM based on self-attention, but without position embeddings, would have great difficulties in understanding the positions of the text inputs relative to each other.
This is because the probability score computed by \\( \mathbf{QK}^T \\) relates each word token to each other word token in \\( O(1) \\) computations regardless of their relative positional distance to each other.
Therefore, for the LLM without position embeddings each token appears to have the same distance to all other tokens, *e.g.* differentiating between *"Hello I love you"* and *"You love I hello"* would be very challenging.
For the LLM to understand sentence order, an additional *cue* is needed and is usually applied in the form of *positional encodings* (also called *positional embeddings*).
Positional encodings encode the position of each token into a numerical representation that the LLM can leverage to better understand sentence order.
The authors of the [*Attention Is All You Need*](https://arxiv.org/abs/1706.03762) paper introduced sinusoidal positional embeddings \\( \mathbf{P} = \mathbf{p}_1, \ldots, \mathbf{p}_N \\), where each vector \\( \mathbf{p}_i \\) is computed as a sinusoidal function of its position \\( i \\) .
The positional encodings are then simply added to the input sequence vectors \\( \mathbf{\hat{X}} = \mathbf{\hat{x}}_1, \ldots, \mathbf{\hat{x}}_N \\) = \\( \mathbf{x}_1 + \mathbf{p}_1, \ldots, \mathbf{x}_N + \mathbf{p}_N \\) thereby cueing the model to better learn sentence order.
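
As a minimal sketch of the sinusoidal scheme (dimension sizes are chosen arbitrarily for illustration):

```python
import torch

def sinusoidal_positions(num_positions: int, dim: int) -> torch.Tensor:
    # p_i[2k]   = sin(i / 10000^(2k/dim))
    # p_i[2k+1] = cos(i / 10000^(2k/dim))
    positions = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)
    freqs = torch.exp(-torch.arange(0, dim, 2, dtype=torch.float32) / dim * torch.log(torch.tensor(10000.0)))
    angles = positions * freqs
    pe = torch.zeros(num_positions, dim)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

# Added to the token embeddings: x_hat_i = x_i + p_i
X = torch.randn(16, 512)                    # toy sequence of 16 token embeddings
X_hat = X + sinusoidal_positions(16, 512)
```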

Instead of using fixed position embeddings, others (such as [Devlin et al.](https://arxiv.org/abs/1810.04805)) used learned positional encodings for which the positional embeddings \\( \mathbf{P} \\) are learned during training.

Sinusoidal and learned position embeddings used to be the predominant methods to encode sentence order into LLMs, but a couple of problems related to these positional encodings were found:
1. Sinusoidal and learned position embeddings are both absolute positional embeddings, *i.e.* encoding a unique embedding for each position id: \\( 0, \ldots, N \\) . As shown by [Huang et al.](https://arxiv.org/abs/2009.13658) and [Su et al.](https://arxiv.org/abs/2104.09864), absolute positional embeddings lead to poor LLM performance for long text inputs. For long text inputs, it is advantageous if the model learns the relative positional distance input tokens have to each other instead of their absolute position.
2. When using learned position embeddings, the LLM has to be trained on a fixed input length \\( N \\), which makes it difficult to extrapolate to an input length longer than what it was trained on.

Recently, relative positional embeddings that can tackle the above mentioned problems have become more popular, most notably:

- [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864)
- [ALiBi](https://arxiv.org/abs/2108.12409)

Both *RoPE* and *ALiBi* argue that it's best to cue the LLM about sentence order directly in the self-attention algorithm as it's there that word tokens are put into relation with each other. More specifically, sentence order should be cued by modifying the \\( \mathbf{QK}^T \\) computation.

Without going into too many details, *RoPE* notes that positional information can be encoded into query-key pairs, *e.g.* \\( \mathbf{q}_i \\) and \\( \mathbf{x}_j \\), by rotating each vector by an angle \\( \theta * i \\) and \\( \theta * j \\) respectively, with \\( i, j \\) describing each vector's sentence position:

$$ \mathbf{\hat{q}}_i^T \mathbf{\hat{x}}_j = \mathbf{{q}}_i^T \mathbf{R}_{\theta, i -j} \mathbf{{x}}_j. $$

\\( \mathbf{R}_{\theta, i - j} \\) thereby represents a rotational matrix. \\( \theta \\) is *not* learned during training, but instead set to a pre-defined value that depends on the maximum input sequence length during training.

> By doing so, the probability score between \\( \mathbf{q}_i \\) and \\( \mathbf{q}_j \\) is only affected if \\( i \ne j \\) and solely depends on the relative distance \\( i - j \\) regardless of each vector's specific positions \\( i \\) and \\( j \\) .

*RoPE* is used in multiple of today's most important LLMs, such as:

- [**Falcon**](https://huggingface.co/tiiuae/falcon-40b)
- [**Llama**](https://arxiv.org/abs/2302.13971)
- [**PaLM**](https://arxiv.org/abs/2204.02311)
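
As a toy illustration of the rotation idea above (this follows the common "rotate-half" formulation and is not the exact code of any particular model), the attention score between a rotated query and key depends only on their relative distance:

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x: torch.Tensor, position: int, base: float = 10000.0) -> torch.Tensor:
    # Rotate a single query or key vector of size `dim` by angles theta_k * position
    dim = x.shape[-1]
    theta = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    angles = position * torch.cat((theta, theta))
    return x * torch.cos(angles) + rotate_half(x) * torch.sin(angles)

q_i = apply_rope(torch.randn(128), position=3)   # rotated query at position i = 3
k_j = apply_rope(torch.randn(128), position=7)   # rotated key at position j = 7
score = q_i @ k_j                                # depends only on the distance i - j
```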

As an alternative, *ALiBi* proposes a much simpler relative position encoding scheme. The relative distance that input tokens have to each other is added as a negative integer scaled by a pre-defined value `m` to each query-key entry of the \\( \mathbf{QK}^T \\) matrix right before the softmax computation.

![](/blog/assets/163_optimize_llm/alibi.png)

As shown in the [ALiBi](https://arxiv.org/abs/2108.12409) paper, this simple relative positional encoding allows the model to retain a high performance even at very long text input sequences.

*ALiBi* is used in multiple of today's most important LLMs, such as:

- [**MPT**](https://huggingface.co/mosaicml/mpt-30b)
- [**BLOOM**](https://huggingface.co/bigscience/bloom)
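
As a toy sketch of the ALiBi bias described above (the slopes follow the paper's geometric schedule for a power-of-two number of heads; all names are illustrative):

```python
import torch

def alibi_bias(seq_len: int, num_heads: int) -> torch.Tensor:
    # Per-head slopes m, geometrically decreasing as in the ALiBi paper
    slopes = torch.tensor([2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)])
    positions = torch.arange(seq_len)
    distances = positions[None, :] - positions[:, None]      # j - i, negative for attended (past) tokens
    bias = slopes[:, None, None] * distances[None, :, :]     # (num_heads, seq_len, seq_len)
    return bias  # added to QK^T right before the softmax

scores = torch.randn(8, 5, 5)       # toy QK^T scores for 8 heads
scores = scores + alibi_bias(5, 8)
```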

Both *RoPE* and *ALiBi* position encodings can extrapolate to input lengths not seen during training, whereas it has been shown that extrapolation works much better out-of-the-box for *ALiBi* as compared to *RoPE*.
For ALiBi, one simply increases the values of the lower triangular position matrix to match the length of the input sequence.