Dataset columns: source (string, 470 distinct values), url (string, 49-167 chars), file_type (string, 1 distinct value), chunk (string, 1-512 chars), chunk_id (string, 5-9 chars)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md
https://huggingface.co/docs/transformers/en/big_models/#model-data-type
.md
PyTorch model weights are normally instantiated as torch.float32, which can be an issue if you try to load a model as a different data type. For example, you'd need twice as much memory: first to load the weights in torch.float32, and then again to load them in your desired data type, like torch.float16. > [!WARNING] > Due to how PyTorch is designed, the `torch_dtype` parameter only supports floating data types.
52_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md
https://huggingface.co/docs/transformers/en/big_models/#model-data-type
.md
> [!WARNING] > Due to how PyTorch is designed, the `torch_dtype` parameter only supports floating data types. To avoid wasting memory like this, explicitly set the `torch_dtype` parameter to the desired data type or set `torch_dtype="auto"` to load the weights with the optimal memory pattern (the data type is automatically derived from the model weights). <hfoptions id="dtype"> <hfoption id="specific dtype"> ```py import torch from transformers import AutoModelForCausalLM
52_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md
https://huggingface.co/docs/transformers/en/big_models/#model-data-type
.md
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype=torch.float16) ``` </hfoption> <hfoption id="auto dtype"> ```py from transformers import AutoModelForCausalLM gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype="auto") ``` </hfoption> </hfoptions> You can also set the data type to use for models instantiated from scratch. ```python import torch from transformers import AutoConfig, AutoModel
52_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md
https://huggingface.co/docs/transformers/en/big_models/#model-data-type
.md
my_config = AutoConfig.from_pretrained("google/gemma-2b", torch_dtype=torch.float16) model = AutoModel.from_config(my_config) ```
52_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
53_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
53_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#efficient-training-on-cpu
.md
This guide focuses on training large models efficiently on CPU.
53_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#mixed-precision-with-ipex
.md
Mixed precision uses single (fp32) and half-precision (bf16/fp16) data types in a model to accelerate training or inference while still preserving much of the single-precision accuracy. Modern CPUs such as 3rd, 4th, and 5th Gen Intel® Xeon® Scalable processors natively support bf16. 6th Gen Intel® Xeon® Scalable processors natively support bf16 and fp16. You should get more performance out of the box by enabling mixed precision training with bf16 or fp16.
53_2_0
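Outside of Trainer, the same idea can be sketched with PyTorch's native autocast context for the CPU backend. This is a minimal illustration, assuming a bf16-capable CPU; the tiny model below is only a placeholder, not part of the original guide.

```python
import torch
import torch.nn as nn

# Placeholder model; any nn.Module behaves the same way under autocast.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))
x = torch.randn(8, 128)

# On CPU, torch.autocast runs eligible ops (e.g. matmuls) in bfloat16
# while keeping numerically sensitive ops in float32.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(x)

print(logits.dtype)  # torch.bfloat16
```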
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#mixed-precision-with-ipex
.md
To further maximize training performance, you can use Intel® Extension for PyTorch (IPEX), a library built on PyTorch that adds support for additional CPU instruction set architecture (ISA) features such as Intel® Advanced Vector Extensions 512 Vector Neural Network Instructions (Intel® AVX512-VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) for an extra performance boost on Intel CPUs. However, CPUs with only AVX2 (e.g., AMD or older Intel CPUs) are not guaranteed to have better performance
53_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#mixed-precision-with-ipex
.md
boost on Intel CPUs. However, CPUs with only AVX2 (e.g., AMD or older Intel CPUs) are not guaranteed to have better performance under IPEX.
53_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#mixed-precision-with-ipex
.md
Auto Mixed Precision (AMP) for CPU backends has been enabled since PyTorch 1.10. IPEX also provides AMP support for bf16/fp16 on CPUs together with bf16/fp16 operator optimizations, and this support has been partially upstreamed to the main PyTorch branch. You can get better performance and user experience with IPEX AMP. See [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html) for more details.
53_2_3
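As a rough illustration of how IPEX and CPU AMP fit together for inference (a sketch, assuming `intel_extension_for_pytorch` is installed and the CPU supports bf16; the model choice is arbitrary):

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "google-bert/bert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ipex.optimize applies operator fusion and bf16 weight prepacking on supported CPUs.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("IPEX can speed up CPU inference.", return_tensors="pt")
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    outputs = model(**inputs)
```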
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#ipex-installation
.md
IPEX releases follow PyTorch releases. To install via pip: | PyTorch Version | IPEX version | | :---------------: | :----------: | | 2.5.0 | 2.5.0+cpu | | 2.4.0 | 2.4.0+cpu | | 2.3.0 | 2.3.0+cpu | | 2.2.0 | 2.2.0+cpu | Run `pip list | grep torch` to find your PyTorch version, then pick the matching IPEX version. ```bash
53_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#ipex-installation
.md
Run `pip list | grep torch` to find your PyTorch version, then pick the matching IPEX version. ```bash pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu ``` You can check the latest versions in [ipex-whl-stable-cpu](https://developer.intel.com/ipex-whl-stable-cpu) if needed. See [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html) for other installation approaches.
53_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#usage-in-trainer
.md
To enable auto mixed precision with IPEX in Trainer, add `use_ipex`, `bf16` or `fp16`, and `use_cpu` to the training command arguments. Take the [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) example. - Training with IPEX using BF16 auto mixed precision on CPU: <pre> python examples/pytorch/question-answering/run_qa.py \ --model_name_or_path google-bert/bert-base-uncased \ --dataset_name squad \
53_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#usage-in-trainer
.md
--model_name_or_path google-bert/bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ <b>--use_ipex</b> \ <b>--bf16</b> \ <b>--use_cpu</b></pre> If you want to enable `use_ipex` and `bf16` in your script, add these parameters to `TrainingArguments` like this: ```diff training_args = TrainingArguments( output_dir=args.output_path,
53_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#usage-in-trainer
.md
```diff training_args = TrainingArguments( output_dir=args.output_path, + bf16=True, + use_ipex=True, + use_cpu=True, **kwargs ) ```
53_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu.md
https://huggingface.co/docs/transformers/en/perf_train_cpu/#practice-example
.md
Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
53_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/
.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
54_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/
.md
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
54_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
To understand the performance optimization techniques you can apply to improve the efficiency of model training speed and memory utilization, it's helpful to get familiar with how the GPU is utilized during training and how compute intensity varies depending on the operation performed. Let's start by exploring a motivating example of GPU utilization and the training run of a model. For the demonstration, we'll need to install a few libraries: ```bash pip install transformers datasets accelerate nvidia-ml-py3
54_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
we'll need to install a few libraries: ```bash pip install transformers datasets accelerate nvidia-ml-py3 ``` The `nvidia-ml-py3` library allows us to monitor the memory usage of the models from within Python. You might be familiar with the `nvidia-smi` command in the terminal - this library allows us to access the same information directly in Python. Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier.
54_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier. In total, we get 512 sequences each with length 512 and store them in a [`~datasets.Dataset`] with PyTorch format. ```py >>> import numpy as np >>> from datasets import Dataset
54_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
>>> seq_len, dataset_size = 512, 512 >>> dummy_data = { ... "input_ids": np.random.randint(100, 30000, (dataset_size, seq_len)), ... "labels": np.random.randint(0, 2, (dataset_size)), ... } >>> ds = Dataset.from_dict(dummy_data) >>> ds.set_format("pt") ``` To print summary statistics for the GPU utilization and the training run with the [`Trainer`] we define two helper functions: ```py >>> from pynvml import *
54_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
>>> def print_gpu_utilization(): ... nvmlInit() ... handle = nvmlDeviceGetHandleByIndex(0) ... info = nvmlDeviceGetMemoryInfo(handle) ... print(f"GPU memory occupied: {info.used//1024**2} MB.")
54_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
>>> def print_summary(result): ... print(f"Time: {result.metrics['train_runtime']:.2f}") ... print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}") ... print_gpu_utilization() ``` Let's verify that we start with free GPU memory: ```py >>> print_gpu_utilization() GPU memory occupied: 0 MB. ``` That looks good: the GPU memory is not occupied as we would expect before we load any models. If that's not the case on
54_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
``` That looks good: the GPU memory is not occupied as we would expect before we load any models. If that's not the case on your machine, make sure to stop all processes that are using GPU memory. However, not all free GPU memory can be used by the user. When a model is loaded to the GPU the kernels are also loaded, which can take up 1-2GB of memory. To see how much memory this takes, we load a tiny tensor onto the GPU, which triggers the kernels to be loaded as well. ```py >>> import torch
54_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#model-training-anatomy
.md
>>> torch.ones((1, 1)).to("cuda") >>> print_gpu_utilization() GPU memory occupied: 1343 MB. ``` We see that the kernels alone take up 1.3GB of GPU memory. Now let's see how much space the model uses.
54_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
First, we load the `google-bert/bert-large-uncased` model. We load the model weights directly to the GPU so that we can check how much space just the weights use. ```py >>> from transformers import AutoModelForSequenceClassification
54_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-large-uncased").to("cuda") >>> print_gpu_utilization() GPU memory occupied: 2631 MB. ``` We can see that the model weights alone take up 1.3 GB of GPU memory. The exact number depends on the specific GPU you are using. Note that on newer GPUs a model can sometimes take up more space since the weights are loaded in an
54_2_1
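As a quick sanity check (a sketch that reuses the `model` loaded above; it is not part of the original walkthrough), you can estimate the weight memory directly from the parameter count, since each fp32 parameter takes 4 bytes:

```python
# Estimate fp32 weight memory: number of parameters * 4 bytes.
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.0f}M")
print(f"Approx. fp32 weight memory: {num_params * 4 / 1024**2:.0f} MB")
# bert-large-uncased has roughly 340M parameters, so this lands around 1.3 GB,
# consistent with the increase in occupied GPU memory observed above.
```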
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
GPU you are using. Note that on newer GPUs a model can sometimes take up more space since the weights are loaded in an optimized fashion that speeds up the usage of the model. Now we can also quickly check if we get the same result as with `nvidia-smi` CLI: ```bash nvidia-smi ``` ```bash Tue Jan 11 08:58:05 2022 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |
54_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================|
54_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
|===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:00:04.0 Off | 0 | | N/A 37C P0 39W / 300W | 2631MiB / 16160MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+
54_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
+-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 3721 C ...nvs/codeparrot/bin/python 2629MiB |
54_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
| 0 N/A N/A 3721 C ...nvs/codeparrot/bin/python 2629MiB | +-----------------------------------------------------------------------------+ ``` We get the same number as before and you can also see that we are using a V100 GPU with 16GB of memory. So now we can start training the model and see how the GPU memory consumption changes. First, we set up a few standard training arguments: ```py default_args = { "output_dir": "tmp", "eval_strategy": "steps", "num_train_epochs": 1,
54_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#load-model
.md
arguments: ```py default_args = { "output_dir": "tmp", "eval_strategy": "steps", "num_train_epochs": 1, "log_level": "error", "report_to": "none", } ``` <Tip> If you plan to run multiple experiments, restart the Python kernel between experiments to properly clear the memory. </Tip>
54_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#memory-utilization-at-vanilla-training
.md
Let's use the [`Trainer`] and train the model without using any GPU performance optimization techniques and a batch size of 4: ```py >>> from transformers import TrainingArguments, Trainer, logging >>> logging.set_verbosity_error()
54_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#memory-utilization-at-vanilla-training
.md
>>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args) >>> trainer = Trainer(model=model, args=training_args, train_dataset=ds) >>> result = trainer.train() >>> print_summary(result) ``` ``` Time: 57.82 Samples/second: 8.86 GPU memory occupied: 14949 MB. ``` We see that already a relatively small batch size almost fills up our GPU's entire memory. However, a larger batch size
54_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#memory-utilization-at-vanilla-training
.md
``` We see that already a relatively small batch size almost fills up our GPU's entire memory. However, a larger batch size can often result in faster model convergence or better end performance. So ideally we want to tune the batch size to our model's needs and not to the GPU limitations. What's interesting is that we use much more memory than the size of the model. To understand a bit better why this is the case let's have a look at a model's operations and memory needs.
54_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-operations
.md
The Transformer architecture includes 3 main groups of operations, grouped below by compute intensity. 1. **Tensor Contractions** Linear layers and components of Multi-Head Attention all do batched **matrix-matrix multiplications**. These operations are the most compute-intensive part of training a transformer. 2. **Statistical Normalizations**
54_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-operations
.md
2. **Statistical Normalizations** Softmax and layer normalization are less compute-intensive than tensor contractions, and involve one or more **reduction operations**, the result of which is then applied via a map. 3. **Element-wise Operators** These are the remaining operators: **biases, dropout, activations, and residual connections**. These are the least compute-intensive operations. This knowledge can be helpful when analyzing performance bottlenecks.
54_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-operations
.md
This knowledge can be helpful when analyzing performance bottlenecks. This summary is derived from [Data Movement Is All You Need: A Case Study on Optimizing Transformers 2020](https://arxiv.org/abs/2007.00072)
54_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
We've seen that training the model uses much more memory than just putting the model on the GPU. This is because there are many components during training that use GPU memory. The components on GPU memory are the following: 1. model weights 2. optimizer states 3. gradients 4. forward activations saved for gradient computation 5. temporary buffers 6. functionality-specific memory A typical model trained in mixed precision with AdamW requires 18 bytes per model parameter plus activation memory. For
54_5_0
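To make the 18-bytes-per-parameter figure concrete, here is a small back-of-the-envelope calculation (a sketch; the ~340M parameter count for bert-large is an approximation):

```python
# Per-parameter memory for mixed-precision AdamW training:
#   weights: fp32 copy + fp16 copy = 4 + 2 = 6 bytes
#   AdamW states: two fp32 moments = 4 + 4 = 8 bytes
#   gradients: kept in fp32        =         4 bytes
bytes_per_param = 6 + 8 + 4  # 18 bytes total

num_params = 340_000_000  # roughly bert-large-uncased
total_gb = bytes_per_param * num_params / 1024**3
print(f"~{total_gb:.1f} GB plus activation memory")  # ~5.7 GB before activations
```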
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
A typical model trained in mixed precision with AdamW requires 18 bytes per model parameter plus activation memory. For inference there are no optimizer states and gradients, so we can subtract those. And thus we end up with 6 bytes per model parameter for mixed precision inference, plus activation memory. Let's look at the details. **Model Weights:** - 4 bytes * number of parameters for fp32 training
54_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
Let's look at the details. **Model Weights:** - 4 bytes * number of parameters for fp32 training - 6 bytes * number of parameters for mixed precision training (maintains a model in fp32 and one in fp16 in memory) **Optimizer States:** - 8 bytes * number of parameters for normal AdamW (maintains 2 states) - 2 bytes * number of parameters for 8-bit AdamW optimizers like [bitsandbytes](https://github.com/bitsandbytes-foundation/bitsandbytes)
54_5_2
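If you want to try the 8-bit optimizer mentioned above with [`Trainer`], one option (an illustrative sketch, assuming `bitsandbytes` is installed) is to select it through the `optim` training argument:

```python
from transformers import TrainingArguments

# "adamw_bnb_8bit" selects bitsandbytes' 8-bit AdamW, which stores the optimizer
# states in 8 bits (~2 bytes per parameter instead of 8).
training_args = TrainingArguments(
    output_dir="tmp",
    per_device_train_batch_size=4,
    optim="adamw_bnb_8bit",
)
```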
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
- 4 bytes * number of parameters for optimizers like SGD with momentum (maintains only 1 state) **Gradients** - 4 bytes * number of parameters for either fp32 or mixed precision training (gradients are always kept in fp32) **Forward Activations** - size depends on many factors, the key ones being sequence length, hidden size and batch size. There are the input and output that are being passed and returned by the forward and the backward functions and the
54_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
There are the input and output that are being passed and returned by the forward and the backward functions and the forward activations saved for gradient computation. **Temporary Memory** Additionally, there are all kinds of temporary variables which get released once the calculation is done, but in the moment these could require additional memory and could push to OOM. Therefore, when coding it's crucial to think
54_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
moment these could require additional memory and could push to OOM. Therefore, when coding it's crucial to think strategically about such temporary variables and sometimes to explicitly free those as soon as they are no longer needed. **Functionality-specific memory** Then, your software could have special memory needs. For example, when generating text using beam search, the software needs to maintain multiple copies of inputs and outputs. **`forward` vs `backward` Execution Speed**
54_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
needs to maintain multiple copies of inputs and outputs. **`forward` vs `backward` Execution Speed** For convolutions and linear layers there are 2x the flops in the backward pass compared to the forward, which generally translates into a ~2x slower backward pass (sometimes more, because sizes in the backward tend to be more awkward). Activations are usually bandwidth-limited, and it’s typical for an activation to have to read more data in the backward than in the forward
54_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
bandwidth-limited, and it’s typical for an activation to have to read more data in the backward than in the forward (e.g. activation forward reads once, writes once, activation backward reads twice, gradOutput and output of the forward, and writes once, gradInput). As you can see, there are potentially a few places where we could save GPU memory or speed up operations. Now that you understand what affects GPU utilization and computation speed, refer to
54_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_memory_anatomy.md
https://huggingface.co/docs/transformers/en/model_memory_anatomy/#anatomy-of-models-memory
.md
Now that you understand what affects GPU utilization and computation speed, refer to the [Methods and tools for efficient training on a single GPU](perf_train_gpu_one) documentation page to learn about performance optimization techniques.
54_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
55_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
55_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#perplexity-of-fixed-length-models
.md
[[open-in-colab]] Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note that the metric applies specifically to classical language models (sometimes called autoregressive or causal language models) and is not well defined for masked language models like BERT (see [summary of the models](model_summary)). Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized
55_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#perplexity-of-fixed-length-models
.md
Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized sequence \\(X = (x_0, x_1, \dots, x_t)\\), then the perplexity of \\(X\\) is, $$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_{i=1}^{t} \log p_\theta (x_i|x_{<i}) } \right\}$$
55_1_1
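The formula maps directly to code: given per-token log-likelihoods \\(\log p_\theta (x_i|x_{<i})\\) from a model, the perplexity is just the exponential of their negative mean. A minimal sketch with made-up values:

```python
import torch

# Hypothetical per-token log-likelihoods log p(x_i | x_<i) for one sequence.
log_probs = torch.tensor([-2.1, -0.7, -1.5, -3.0, -0.9])

ppl = torch.exp(-log_probs.mean())  # exponentiated average negative log-likelihood
print(f"Perplexity: {ppl.item():.2f}")
```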
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#perplexity-of-fixed-length-models
.md
where \\(\log p_\theta (x_i|x_{<i})\\) is the log-likelihood of the ith token conditioned on the preceding tokens \\(x_{<i}\\) according to our model. Intuitively, it can be thought of as an evaluation of the model's ability to predict uniformly among the set of specified tokens in a corpus. Importantly, this means that the tokenization procedure has a direct impact on a model's perplexity which should always be taken into consideration when comparing different models.
55_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#perplexity-of-fixed-length-models
.md
This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this [fantastic blog post on The Gradient](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/).
55_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#calculating-ppl-with-fixed-length-models
.md
If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively factorizing a sequence and conditioning on the entire preceding subsequence at each step, as shown below. <img width="600" alt="Full decomposition of a sequence with unlimited context length" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/>
55_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#calculating-ppl-with-fixed-length-models
.md
When working with approximate models, however, we typically have a constraint on the number of tokens the model can process. The largest version of [GPT-2](model_doc/gpt2), for example, has a fixed length of 1024 tokens, so we cannot calculate \\(p_\theta(x_t|x_{<t})\\) directly when \\(t\\) is greater than 1024. Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max
55_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#calculating-ppl-with-fixed-length-models
.md
Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max input size is \\(k\\), we then approximate the likelihood of a token \\(x_t\\) by conditioning only on the \\(k-1\\) tokens that precede it rather than the entire context. When evaluating the model's perplexity of a sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed log-likelihoods of each segment independently.
55_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#calculating-ppl-with-fixed-length-models
.md
log-likelihoods of each segment independently. <img width="600" alt="Suboptimal PPL not taking advantage of full available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/> This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will
55_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#calculating-ppl-with-fixed-length-models
.md
approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will have less context at most of the prediction steps. Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly sliding the context window so that the model has more context when making each prediction.
55_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#calculating-ppl-with-fixed-length-models
.md
sliding the context window so that the model has more context when making each prediction. <img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/> This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good
55_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#calculating-ppl-with-fixed-length-models
.md
favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by 1 token at a time. This allows computation to proceed much faster while still giving the model a large context to make predictions at each step.
55_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
Let's demonstrate this process with GPT-2. ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast from accelerate.test_utils.testing import get_backend
55_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.) model_id = "openai-community/gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) ``` We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire
55_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire dataset in memory. ```python from datasets import load_dataset
55_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") ``` With 🤗 Transformers, we can simply pass the `input_ids` as the `labels` to our model, and the average negative log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating
55_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating as context to be included in our loss, so we can set these targets to `-100` so that they are ignored. The following is an example of how we could do this with a stride of `512`. This means that the model will have at least 512 tokens for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens available to condition on). ```python
55_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
available to condition on). ```python import torch from tqdm import tqdm
55_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nll_sum = 0.0 n_tokens = 0 prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # may be different from stride on last loop input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids)
55_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # loss is calculated using CrossEntropyLoss which averages over valid labels # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels # to the left by 1. neg_log_likelihood = outputs.loss
55_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
# Accumulate the total negative log-likelihood and the total number of tokens num_valid_tokens = (target_ids != -100).sum().item() # number of valid tokens in target_ids batch_size = target_ids.size(0) num_loss_tokens = num_valid_tokens - batch_size # subtract batch_size due to internal label shift nll_sum += neg_log_likelihood * num_loss_tokens n_tokens += num_loss_tokens prev_end_loc = end_loc if end_loc == seq_len: break
55_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
avg_nll = nll_sum / n_tokens # average negative log-likelihood per token ppl = torch.exp(avg_nll) ``` Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction, and the better the reported perplexity will typically be.
55_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perplexity.md
https://huggingface.co/docs/transformers/en/perplexity/#example-calculating-perplexity-with-gpt-2-in--transformers
.md
and the better the reported perplexity will typically be. When we run the above with `stride = 1024`, i.e. no overlap, the resulting PPL is `19.44`, which is about the same as the `19.93` reported in the GPT-2 paper. By using `stride = 512` and thereby employing our strided sliding-window strategy, this jumps down to `16.44`. This is not only a more favorable score, but is calculated in a way that is closer to the true autoregressive decomposition of a sequence likelihood.
55_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
56_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
56_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#agents-and-tools
.md
[[open-in-colab]]
56_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#what-is-an-agent
.md
Large Language Models (LLMs) trained to perform [causal language modeling](./tasks/language_modeling) can tackle a wide range of tasks, but they often struggle with basic tasks like logic, calculation, and search. When prompted in domains in which they do not perform well, they often fail to generate the answer we expect them to. One approach to overcome this weakness is to create an *agent*. An agent is a system that uses an LLM as its engine, and it has access to functions called *tools*.
56_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#what-is-an-agent
.md
An agent is a system that uses an LLM as its engine, and it has access to functions called *tools*. These *tools* are functions for performing a task, and they contain all the descriptions necessary for the agent to properly use them. The agent can be programmed to: - devise a series of actions/tools and run them all at once, like the [`CodeAgent`] - plan and execute actions/tools one by one and wait for the outcome of each action before launching the next one, like the [`ReactJsonAgent`]
56_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#code-agent
.md
This agent has a planning step, then generates python code to execute all its actions at once. It natively handles different input and output types for its tools, thus it is the recommended choice for multimodal tasks.
56_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#react-agents
.md
This is the go-to agent to solve reasoning tasks, since the ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) makes it really efficient to think on the basis of its previous observations. We implement two versions of the ReAct agent: - [`ReactJsonAgent`] generates tool calls as JSON in its output. - [`ReactCodeAgent`] is a new type of ReAct agent that generates its tool calls as blobs of code, which works really well for LLMs with strong coding performance. > [!TIP]
56_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#react-agents
.md
> [!TIP] > Read [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about ReAct agents. <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" /> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" /> </div>
56_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#react-agents
.md
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" /> </div> ![Framework of a React Agent](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png) For example, here is how a ReAct Code agent would work its way through the following question. ```py3 >>> agent.run(
56_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#react-agents
.md
For example, here is how a ReAct Code agent would work its way through the following question. ```py3 >>> agent.run( ... "How many more blocks (also denoted as layers) in BERT base encoder than the encoder from the architecture proposed in Attention is All You Need?", ... ) =====New task===== How many more blocks (also denoted as layers) in BERT base encoder than the encoder from the architecture proposed in Attention is All You Need? ====Agent is executing the code below:
56_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#react-agents
.md
====Agent is executing the code below: bert_blocks = search(query="number of blocks in BERT base encoder") print("BERT blocks:", bert_blocks) ==== Print outputs: BERT blocks: twelve encoder blocks
56_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#react-agents
.md
====Agent is executing the code below: attention_layer = search(query="number of layers in Attention is All You Need") print("Attention layers:", attention_layer) ==== Print outputs: Attention layers: Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position- 2 Page 3 Figure 1: The Transformer - model architecture.
56_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#react-agents
.md
====Agent is executing the code below: bert_blocks = 12 attention_layers = 6 diff = bert_blocks - attention_layers print("Difference in blocks:", diff) final_answer(diff) ==== Print outputs: Difference in blocks: 6 Final answer: 6 ```
56_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
To initialize an agent, you need these arguments: - an LLM to power your agent - the agent is not exactly the LLM, it’s more like the agent is a program that uses an LLM as its engine. - a system prompt: what the LLM engine will be prompted with to generate its output - a toolbox from which the agent picks tools to execute - a parser to extract which tools to call, and with which arguments, from the LLM output
56_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
- a parser to extract which tools to call, and with which arguments, from the LLM output Upon initialization of the agent system, the tool attributes are used to generate a tool description, then baked into the agent’s `system_prompt` to let it know which tools it can use and why. To start, install the `agents` extra in order to get all default dependencies. ```bash pip install transformers[agents] ```
56_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
```bash pip install transformers[agents] ``` Build your LLM engine by defining a `llm_engine` method which accepts a list of [messages](./chat_templating) and returns text. This callable also needs to accept a `stop` argument that indicates when to stop generating. ```python from huggingface_hub import login, InferenceClient
56_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
login("<YOUR_HUGGINGFACEHUB_API_TOKEN>") client = InferenceClient(model="meta-llama/Meta-Llama-3-70B-Instruct")
56_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
def llm_engine(messages, stop_sequences=["Task"]) -> str: response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000) answer = response.choices[0].message.content return answer ``` You could use any `llm_engine` method as long as: 1. it follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`. 2. it stops generating outputs at the sequences passed in the argument `stop_sequences`
56_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
2. it stops generating outputs at the sequences passed in the argument `stop_sequences` Additionally, `llm_engine` can also take a `grammar` argument. If you specify a `grammar` upon agent initialization, it will be passed to the calls to `llm_engine` to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) and force properly-formatted agent outputs.
56_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
You will also need a `tools` argument which accepts a list of `Tools` - it can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`. Now you can create an agent, like [`CodeAgent`], and run it. You can also create a [`TransformersEngine`] with a pre-initialized pipeline to run inference on your local machine using `transformers`.
56_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
For convenience, since agentic behaviours generally require stronger models such as `Llama-3.1-70B-Instruct` that are harder to run locally for now, we also provide the [`HfApiEngine`] class that initializes a `huggingface_hub.InferenceClient` under the hood. ```python from transformers import CodeAgent, HfApiEngine
56_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
llm_engine = HfApiEngine(model="meta-llama/Meta-Llama-3-70B-Instruct") agent = CodeAgent(tools=[], llm_engine=llm_engine, add_base_tools=True) agent.run( "Could you translate this sentence from French, say it out loud and return the audio.", sentence="Où est la boulangerie la plus proche?", ) ``` This will be handy in case of emergency baguette need! You can even leave the argument `llm_engine` undefined, and an [`HfApiEngine`] will be created by default. ```python from transformers import CodeAgent
56_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
agent = CodeAgent(tools=[], add_base_tools=True) agent.run( "Could you translate this sentence from French, say it out loud and give me the audio.", sentence="Où est la boulangerie la plus proche?", ) ``` Note that we used an additional `sentence` argument: you can pass text as additional arguments to the model. You can also use this to indicate the path to local or remote files for the model to use: ```py from transformers import ReactCodeAgent
56_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
agent = ReactCodeAgent(tools=[], llm_engine=llm_engine, add_base_tools=True)
56_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
agent.run("Why does Mike not know many people in New York?", audio="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3") ``` The prompt and output parser were automatically defined, but you can easily inspect them by calling the `system_prompt_template` on your agent. ```python print(agent.system_prompt_template) ``` It's important to explain as clearly as possible the task you want to perform.
56_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md
https://huggingface.co/docs/transformers/en/agents/#how-can-i-build-an-agent
.md
print(agent.system_prompt_template) ``` It's important to explain as clearly as possible the task you want to perform. Every [`~Agent.run`] operation is independent, and since an agent is powered by an LLM, minor variations in your prompt might yield completely different results. You can also run an agent consecutively for different tasks: each time the attributes `agent.task` and `agent.logs` will be re-initialized.
56_5_12
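For example (a sketch that reuses the `agent` created earlier; the tasks are arbitrary), two consecutive, unrelated runs each start from fresh task and log state:

```python
# Each call to run() is independent: agent.task and agent.logs are
# re-initialized before the new task is executed.
agent.run("Summarize what a ReAct agent does in one sentence.")
agent.run("How many encoder layers does BERT base have?")
```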