source (string, 470 classes) | url (string, 49-167 chars) | file_type (string, 1 class) | chunk (string, 1-512 chars) | chunk_id (string, 5-9 chars)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | - [DeepSpeed](https://www.deepspeed.ai/tutorials/pipeline/)
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation - no API.
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS.
- [OSLO](https://github.com/tunib-ai/oslo) - this is implemented based on the Hugging Face Transformers. | 36_6_20 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | - [OSLO](https://github.com/tunib-ai/oslo) - this is implemented based on the Hugging Face Transformers.
🤗 Transformers status: as of this writing, none of the models support full PP. GPT2 and T5 models have naive MP support.
The main obstacle is that the models cannot currently be converted to `nn.Sequential` with all inputs being plain tensors,
because the models include many features that make such a conversion very complicated, and these would need to be removed first. | 36_6_21 |
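For a sense of what naive MP looks like in practice, here is a minimal sketch that simply spreads a model's layers across the visible GPUs with `device_map="auto"` (this requires Accelerate to be installed; the checkpoint and generation settings are only examples):
```python
# Naive MP sketch: layers are placed on different GPUs and executed sequentially,
# so only one GPU is busy at any given time - this saves memory, not compute.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")  # shard layers over available GPUs

inputs = tokenizer("Naive model parallelism means", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```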
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | DeepSpeed and Megatron-LM integrations are available in [🤗 Accelerate](https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed)
Other approaches:
DeepSpeed, Varuna and SageMaker use the concept of an [Interleaved Pipeline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html)
<div class="flex justify-center"> | 36_6_22 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | <div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-sagemaker-interleaved-pipeline.png" alt="Interleaved pipeline execution"/>
</div>
Here the bubble (idle time) is further minimized by prioritizing backward passes. Varuna further attempts to improve the
schedule by using simulations to discover the most efficient scheduling. | 36_6_23 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | schedule by using simulations to discover the most efficient scheduling.
OSLO has a pipeline parallelism implementation based on Transformers that does not require `nn.Sequential` conversion. | 36_6_24 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | In Tensor Parallelism, each GPU processes a slice of a tensor and only aggregates the full tensor for operations requiring it.
To describe this method, this section of the guide relies on the concepts and diagrams from the [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
paper: [Efficient Large-Scale Language Model Training on GPU Clusters](https://arxiv.org/abs/2104.04473).
The main building block of any transformer is a fully connected `nn.Linear` followed by a nonlinear activation `GeLU`. | 36_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | The main building block of any transformer is a fully connected `nn.Linear` followed by a nonlinear activation `GeLU`.
The dot-product part of it, following the Megatron paper's notation, can be written as `Y = GeLU(XA)`, where `X` is
an input vector, `Y` is the output vector, and `A` is the weight matrix.
If we look at the computation in matrix form, we can see how the matrix multiplication can be split between multiple GPUs:
<div class="flex justify-center"> | 36_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | <div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_gemm.png" alt="Parallel GEMM"/>
</div>
If we split the weight matrix `A` column-wise across `N` GPUs and perform matrix multiplications `XA_1` through `XA_n` in parallel,
then we will end up with `N` output vectors `Y_1, Y_2, ..., Y_n` which can be fed into `GeLU` independently:
<div class="flex justify-center"> | 36_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | <div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-independent-gelu.png" alt="Independent GeLU"/>
</div>
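To make the column-wise split concrete, here is a small single-process PyTorch sketch (no actual multi-GPU communication) showing that applying `GeLU` to each column shard independently and concatenating the results reproduces the unsharded computation:
```python
# Column-parallel linear layer, simulated on one device for illustration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(4, 8)    # a batch of 4 input vectors of size 8
A = torch.randn(8, 16)   # the full weight matrix

Y_reference = F.gelu(X @ A)                         # unsharded: Y = GeLU(XA)

A_shards = torch.chunk(A, chunks=2, dim=1)          # column shards A_1, A_2 ("one per GPU")
Y_shards = [F.gelu(X @ A_i) for A_i in A_shards]    # each shard is processed independently
Y_parallel = torch.cat(Y_shards, dim=1)             # gather the shards to reconstruct Y

print(torch.allclose(Y_reference, Y_parallel, atol=1e-6))  # True
```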
Using this principle, we can update a multi-layer perceptron of arbitrary depth, without the need for any synchronization
between GPUs until the very end, where we need to reconstruct the output vector from shards. The Megatron-LM paper authors
provide a helpful illustration for that: | 36_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | provide a helpful illustration for that:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_shard_processing.png" alt="Parallel shard processing"/>
</div>
Parallelizing the multi-headed attention layers is even simpler, since they are already inherently parallel, due to having
multiple independent heads!
<div class="flex justify-center"> | 36_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | multiple independent heads!
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_self_attention.png" alt="Parallel self-attention"/>
</div>
Special considerations: TP requires a very fast network, and therefore it's not advisable to do TP across more than one node.
Practically, if a node has 4 GPUs, the highest TP degree is therefore 4. If you need a TP degree of 8, you need to use | 36_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | Practically, if a node has 4 GPUs, the highest TP degree is therefore 4. If you need a TP degree of 8, you need to use
nodes that have at least 8 GPUs.
This section is based on the original, much more [detailed TP overview](https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530)
by [@anton-l](https://github.com/anton-l).
Alternative names:
- DeepSpeed calls it [tensor slicing](https://www.deepspeed.ai/training/#model-parallelism)
Implementations: | 36_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | - DeepSpeed calls it [tensor slicing](https://www.deepspeed.ai/training/#model-parallelism)
Implementations:
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation, as it's very model-specific
- [parallelformers](https://github.com/tunib-ai/parallelformers) (only inference at the moment)
- [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS. | 36_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | - [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS.
- [OSLO](https://github.com/tunib-ai/oslo) has the tensor parallelism implementation based on the Transformers.
SageMaker combines TP with DP for more efficient processing.
🤗 Transformers status:
- core: not yet implemented in the core | 36_7_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | SageMaker combines TP with DP for a more efficient processing.
🤗 Transformers status:
- core: not yet implemented in the core
- but if you want inference, [parallelformers](https://github.com/tunib-ai/parallelformers) provides this support for most of our models, so you can use it until TP is implemented in the core. Hopefully training mode will be supported too. | 36_7_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#tensor-parallelism | .md | - Deepspeed-Inference also supports our BERT, GPT-2, and GPT-Neo models in their super-fast CUDA-kernel-based inference mode, see more [here](https://www.deepspeed.ai/tutorials/inference-tutorial/)
🤗 Accelerate integrates with [TP from Megatron-LM](https://huggingface.co/docs/accelerate/v0.23.0/en/usage_guides/megatron_lm). | 36_7_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#data-parallelism--pipeline-parallelism | .md | The following diagram from the DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/) demonstrates
how one can combine DP with PP.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero-dp-pp.png" alt="DP + PP-2d"/>
</div>
Here it's important to see how DP rank 0 doesn't see GPU2 and DP rank 1 doesn't see GPU3. To DP there is just GPUs 0 | 36_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#data-parallelism--pipeline-parallelism | .md | </div>
Here it's important to see how DP rank 0 doesn't see GPU2 and DP rank 1 doesn't see GPU3. To DP, there are just GPUs 0
and 1, and it feeds data to them as if there were only 2 GPUs. GPU0 "secretly" offloads some of its load to GPU2 using PP,
and GPU1 does the same by enlisting GPU3's aid.
Since each dimension requires at least 2 GPUs, here you'd need at least 4 GPUs.
Implementations:
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) | 36_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#data-parallelism--pipeline-parallelism | .md | Implementations:
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972)
- [OSLO](https://github.com/tunib-ai/oslo)
🤗 Transformers status: not yet implemented | 36_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#data-parallelism--pipeline-parallelism--tensor-parallelism | .md | To get even more efficient training, 3D parallelism is used, where PP is combined with TP and DP. This can be seen in the following diagram.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-deepspeed-3d.png" alt="dp-pp-tp-3d"/>
</div> | 36_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#data-parallelism--pipeline-parallelism--tensor-parallelism | .md | </div>
This diagram is from a blog post [3D parallelism: Scaling to trillion-parameter models](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/), which is a good read as well.
Since each dimension requires at least 2 GPUs, here you'd need at least 8 GPUs.
Implementations:
- [DeepSpeed](https://github.com/microsoft/DeepSpeed) - DeepSpeed also includes an even more efficient DP, which they call ZeRO-DP. | 36_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#data-parallelism--pipeline-parallelism--tensor-parallelism | .md | - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972)
- [OSLO](https://github.com/tunib-ai/oslo)
🤗 Transformers status: not yet implemented, since we have no PP and TP. | 36_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism--pipeline-parallelism--tensor-parallelism | .md | One of the main features of DeepSpeed is ZeRO, which is a super-scalable extension of DP. It has already been
discussed in [ZeRO Data Parallelism](#zero-data-parallelism). Normally it's a standalone feature that doesn't require PP or TP.
But it can be combined with PP and TP.
When ZeRO-DP is combined with PP (and optionally TP) it typically enables only ZeRO stage 1 (optimizer sharding). | 36_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism--pipeline-parallelism--tensor-parallelism | .md | When ZeRO-DP is combined with PP (and optionally TP) it typically enables only ZeRO stage 1 (optimizer sharding).
While it's theoretically possible to use ZeRO stage 2 (gradient sharding) with Pipeline Parallelism, it will have negative
performance impacts. There would need to be an additional reduce-scatter collective for every micro-batch to aggregate
the gradients before sharding, which adds a potentially significant communication overhead. By nature of Pipeline Parallelism, | 36_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism--pipeline-parallelism--tensor-parallelism | .md | the gradients before sharding, which adds a potentially significant communication overhead. By nature of Pipeline Parallelism,
small micro-batches are used and instead the focus is on trying to balance arithmetic intensity (micro-batch size) with
minimizing the Pipeline bubble (number of micro-batches). Therefore those communication costs are going to impact the performance.
In addition, there are already fewer layers than normal due to PP and so the memory savings won't be huge. PP already | 36_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism--pipeline-parallelism--tensor-parallelism | .md | In addition, there are already fewer layers than normal due to PP and so the memory savings won't be huge. PP already
reduces gradient size by `1/PP`, so gradient sharding savings on top of that are less significant than in pure DP.
ZeRO stage 3 is not a good choice either, for the same reason - it requires more inter-node communication.
And since we have ZeRO, the other benefit is ZeRO-Offload: since this is stage 1, optimizer states can be offloaded to the CPU.
Implementations: | 36_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism--pipeline-parallelism--tensor-parallelism | .md | Implementations:
- [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) and [Megatron-Deepspeed from BigScience](https://github.com/bigscience-workshop/Megatron-DeepSpeed), which is a fork of the former repo.
- [OSLO](https://github.com/tunib-ai/oslo)
Important papers:
- [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model](
https://arxiv.org/abs/2201.11990)
🤗 Transformers status: not yet implemented, since we have no PP and TP. | 36_10_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#flexflow | .md | [FlexFlow](https://github.com/flexflow/FlexFlow) takes a slightly different approach to solving the parallelization problem.
Paper: ["Beyond Data and Model Parallelism for Deep Neural Networks" by Zhihao Jia, Matei Zaharia, Alex Aiken](https://arxiv.org/abs/1807.05358)
It performs a sort of 4D Parallelism over Sample-Operator-Attribute-Parameter.
1. Sample = Data Parallelism (sample-wise parallel)
2. Operator = Parallelize a single operation into several sub-operations | 36_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#flexflow | .md | 1. Sample = Data Parallelism (sample-wise parallel)
2. Operator = Parallelize a single operation into several sub-operations
3. Attribute = Data Parallelism (length-wise parallel)
4. Parameter = Model Parallelism (regardless of dimension - horizontal or vertical)
Examples:
* Sample
Let's take 10 batches of sequence length 512. If we parallelize them by sample dimension into 2 devices, we get 10 x 512 which becomes 5 x 2 x 512.
* Operator | 36_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#flexflow | .md | * Operator
If we perform layer normalization, we compute std first and mean second, and then we can normalize data.
Operator parallelism allows computing std and mean in parallel. So if we parallelize them by operator dimension into 2
devices (cuda:0, cuda:1), first we copy input data into both devices, and cuda:0 computes std, cuda:1 computes mean at the same time.
* Attribute | 36_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#flexflow | .md | * Attribute
We have 10 batches of length 512. If we parallelize them by the attribute dimension into 2 devices, 10 x 512 becomes 10 x 2 x 256.
* Parameter
It is similar to tensor model parallelism or naive layer-wise model parallelism; a small sketch of the Sample and Attribute splits follows below.
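Here is a toy PyTorch sketch of the Sample and Attribute splits described above, using a `10 x 512` tensor and 2 devices (run on a single device purely to show the resulting shapes):
```python
import torch

x = torch.randn(10, 512)  # 10 samples of sequence length 512

# Sample split: cut the batch dimension across 2 devices -> 2 shards of shape [5, 512]
sample_shards = torch.chunk(x, chunks=2, dim=0)
print([tuple(s.shape) for s in sample_shards])     # [(5, 512), (5, 512)]

# Attribute split: cut the length dimension across 2 devices -> 2 shards of shape [10, 256]
attribute_shards = torch.chunk(x, chunks=2, dim=1)
print([tuple(s.shape) for s in attribute_shards])  # [(10, 256), (10, 256)]
```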
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-flexflow.jpeg" alt="flex-flow-soap"/>
</div> | 36_11_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#flexflow | .md | </div>
The significance of this framework is that it takes resources like (1) GPU/TPU/CPU vs. (2) RAM/DRAM vs. (3)
fast intra-connect vs. slow inter-connect, and it automatically optimizes over all of these, algorithmically deciding which
parallelization to use where.
One very important aspect is that FlexFlow is designed for optimizing DNN parallelizations for models with static and
fixed workloads, since models with dynamic behavior may prefer different parallelization strategies across iterations. | 36_11_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#flexflow | .md | fixed workloads, since models with dynamic behavior may prefer different parallelization strategies across iterations.
So the promise is very attractive - it runs a 30-minute simulation on the cluster of choice and comes up with the best
strategy to utilize that specific environment. If you add/remove/replace any parts, it re-runs and re-optimizes the plan
for the new setup, and then you can train. A different setup will have its own custom optimization. | 36_11_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#flexflow | .md | for that. And then you can train. A different setup will have its own custom optimization.
🤗 Transformers status: Transformers models are FX-trace-able via [transformers.utils.fx](https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py),
which is a prerequisite for FlexFlow. However, changes are required on the FlexFlow side to make it work with Transformers models. | 36_11_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#gpu-selection | .md | When training on multiple GPUs, you can specify the number of GPUs to use and in what order. This can be useful for instance when you have GPUs with different computing power and want to use the faster GPU first. The selection process works for both [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) and [DataParallel](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) to use only a subset of the available GPUs, and you don't | 36_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#gpu-selection | .md | to use only a subset of the available GPUs, and you don't need Accelerate or the [DeepSpeed integration](./main_classes/deepspeed). | 36_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#number-of-gpus | .md | For example, if you have 4 GPUs and you only want to use the first 2:
<hfoptions id="select-gpu">
<hfoption id="torchrun">
Use the `--nproc_per_node` to select how many GPUs to use.
```bash
torchrun --nproc_per_node=2 trainer-program.py ...
```
</hfoption>
<hfoption id="Accelerate">
Use `--num_processes` to select how many GPUs to use.
```bash
accelerate launch --num_processes 2 trainer-program.py ...
```
</hfoption>
<hfoption id="DeepSpeed"> | 36_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#number-of-gpus | .md | ```bash
accelerate launch --num_processes 2 trainer-program.py ...
```
</hfoption>
<hfoption id="DeepSpeed">
Use `--num_gpus` to select how many GPUs to use.
```bash
deepspeed --num_gpus 2 trainer-program.py ...
```
</hfoption>
</hfoptions> | 36_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#order-of-gpus | .md | Now, to select which GPUs to use and their order, you'll use the `CUDA_VISIBLE_DEVICES` environment variable. It is easiest to set the environment variable in `~/.bashrc` or another startup config file. `CUDA_VISIBLE_DEVICES` is used to map which GPUs are used. For example, if you have 4 GPUs (0, 1, 2, 3) and you only want to use GPUs 0 and 2:
```bash
CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ...
``` | 36_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#order-of-gpus | .md | ```bash
CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ...
```
Only the 2 physical GPUs (0 and 2) are "visible" to PyTorch, and they are mapped to `cuda:0` and `cuda:1` respectively. You can also reverse the order of the GPUs to use GPU 2 first: now the mapping is `cuda:1` for GPU 0 and `cuda:0` for GPU 2.
```bash
CUDA_VISIBLE_DEVICES=2,0 torchrun trainer-program.py ...
```
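If you want to double-check which devices PyTorch actually sees after setting the variable, a quick sanity check (assuming PyTorch is installed) is:
```python
# Run as: CUDA_VISIBLE_DEVICES=2,0 python check_gpus.py
# PyTorch only sees the listed GPUs, re-indexed from 0 in the order given.
import torch

print(torch.cuda.device_count())  # 2
for i in range(torch.cuda.device_count()):
    print(f"cuda:{i} -> {torch.cuda.get_device_name(i)}")
```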
You can also set the `CUDA_VISIBLE_DEVICES` environment variable to an empty value to create an environment without GPUs. | 36_14_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#order-of-gpus | .md | You can also set the `CUDA_VISIBLE_DEVICES` environment variable to an empty value to create an environment without GPUs.
```bash
CUDA_VISIBLE_DEVICES= python trainer-program.py ...
```
<Tip warning={true}> | 36_14_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#order-of-gpus | .md | ```bash
CUDA_VISIBLE_DEVICES= python trainer-program.py ...
```
<Tip warning={true}>
As with any environment variable, these can be exported instead of being added to the command line. However, this is not recommended because it can be confusing if you forget how the environment variable was set up and you end up using the wrong GPUs. Instead, it is common practice to set the environment variable for a specific training run on the same command line.
</Tip> | 36_14_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#order-of-gpus | .md | </Tip>
`CUDA_DEVICE_ORDER` is an alternative environment variable you can use to control how the GPUs are ordered. You can either order them by:
1. PCIe bus IDs, which matches the order of [`nvidia-smi`](https://developer.nvidia.com/nvidia-system-management-interface) and [`rocm-smi`](https://rocm.docs.amd.com/projects/rocm_smi_lib/en/latest/.doxygen/docBin/html/index.html) for NVIDIA and AMD GPUs respectively
```bash
export CUDA_DEVICE_ORDER=PCI_BUS_ID
```
2. GPU compute ability
```bash | 36_14_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#order-of-gpus | .md | ```bash
export CUDA_DEVICE_ORDER=PCI_BUS_ID
```
2. GPU compute ability
```bash
export CUDA_DEVICE_ORDER=FASTEST_FIRST
``` | 36_14_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#order-of-gpus | .md | ```bash
export CUDA_DEVICE_ORDER=FASTEST_FIRST
```
The `CUDA_DEVICE_ORDER` variable is especially useful if your training setup consists of an older and a newer GPU, where the older GPU appears first, but you cannot physically swap the cards to make the newer GPU appear first. In this case, set `CUDA_DEVICE_ORDER=FASTEST_FIRST` to always use the newer and faster GPU first (`nvidia-smi` or `rocm-smi` still reports the GPUs in their PCIe order). Or you could also set `export CUDA_VISIBLE_DEVICES=1,0`. | 36_14_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 37_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 37_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | An increasingly common use case for LLMs is **chat**. In a chat context, rather than continuing a single string
of text (as is the case with a standard language model), the model instead continues a conversation that consists
of one or more **messages**, each of which includes a **role**, like "user" or "assistant", as well as message text.
Much like tokenization, different models expect very different input formats for chat. This is the reason we added | 37_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | Much like tokenization, different models expect very different input formats for chat. This is the reason we added
**chat templates** as a feature. Chat templates are part of the tokenizer for text-only LLMs or the processor for multimodal LLMs. They specify how to convert conversations,
represented as lists of messages, into a single tokenizable string in the format that the model expects.
Let's make this concrete with a quick example using the `mistralai/Mistral-7B-Instruct-v0.1` model:
```python | 37_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | Let's make this concrete with a quick example using the `mistralai/Mistral-7B-Instruct-v0.1` model:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") | 37_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | >>> chat = [
... {"role": "user", "content": "Hello, how are you?"},
... {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
... {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ] | 37_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | >>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```
Notice how the tokenizer has added the control tokens [INST] and [/INST] to indicate the start and end of
user messages (but not assistant messages!), and the entire chat is condensed into a single string. | 37_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | user messages (but not assistant messages!), and the entire chat is condensed into a single string.
If we use `tokenize=True`, which is the default setting, that string will also be tokenized for us.
Now, try the same code, but swap in the `HuggingFaceH4/zephyr-7b-beta` model instead, and you should get:
```text
<|user|>
Hello, how are you?</s>
<|assistant|>
I'm doing great. How can I help you today?</s>
<|user|>
I'd like to show off how chat templating works!</s>
``` | 37_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | <|assistant|>
I'm doing great. How can I help you today?</s>
<|user|>
I'd like to show off how chat templating works!</s>
```
Both Zephyr and Mistral-Instruct were fine-tuned from the same base model, `Mistral-7B-v0.1`. However, they were trained
with totally different chat formats. Without chat templates, you would have to write manual formatting code for each
model, and it's very easy to make minor errors that hurt performance! Chat templates handle the details of formatting | 37_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#introduction | .md | model, and it's very easy to make minor errors that hurt performance! Chat templates handle the details of formatting
for you, allowing you to write universal code that works for any model. | 37_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#how-do-i-use-chat-templates | .md | As you can see in the example above, chat templates are easy to use. Simply build a list of messages, with `role`
and `content` keys, and then pass it to the [`~PreTrainedTokenizer.apply_chat_template`] or [`~ProcessorMixin.apply_chat_template`] method
depending on what type of model you are using. Once you do that,
you'll get output that's ready to go! When using chat templates as input for model generation, it's also a good idea | 37_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#how-do-i-use-chat-templates | .md | you'll get output that's ready to go! When using chat templates as input for model generation, it's also a good idea
to use `add_generation_prompt=True` to add a [generation prompt](#what-are-generation-prompts). | 37_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-text-only-llms | .md | Here's an example of preparing input for `model.generate()`, using `Zephyr` again:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint) # You may want to use bfloat16 and/or move to GPU here | 37_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-text-only-llms | .md | messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(tokenized_chat[0]))
```
This will yield a string in the input format that Zephyr expects.
```text
<|system|> | 37_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-text-only-llms | .md | ```
This will yield a string in the input format that Zephyr expects.
```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
```
Now that our input is formatted correctly for Zephyr, we can use the model to generate a response to the user's question:
```python
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
This will yield: | 37_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-text-only-llms | .md | outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
This will yield:
```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|> | 37_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-text-only-llms | .md | <|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all.
``` | 37_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-multimodal-llms | .md | For multimodal LLMs such as [LLaVA](https://huggingface.co/llava-hf) the prompts can be formatted in a similar way. The only difference is that you also need to pass input images/videos along with the text. Each `"content"`
has to be a list containing text and/or image/video entries.
Here's an example of preparing input for the `LLaVA` model:
```python
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration | 37_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-multimodal-llms | .md | model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"
model = LlavaOnevisionForConditionalGeneration.from_pretrained(model_id) # You may want to use bfloat16 and/or move to GPU here
processor = AutoProcessor.from_pretrained(model_id) | 37_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-multimodal-llms | .md | messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a friendly chatbot who always responds in the style of a pirate"}],
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "What are these?"},
        ],
    },
] | 37_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-multimodal-llms | .md | processed_chat = processor.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt")
print(processor.batch_decode(processed_chat["input_ids"][:, :30]))
```
This yields a string in LLaVA's expected input format with many `<image>` tokens at the end. | 37_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-multimodal-llms | .md | ```
This yields a string in LLaVA's expected input format with many `<image>` tokens at the end.
The `<image>` tokens are placeholders and each one will be replaced by image embeddings when the model is run in the forward call. The `processed_chat` can be further passed into [`~GenerationMixin.generate`] to generate text.
```text
'<|im_start|>system
You are a friendly chatbot who always responds in the style of a pirate<|im_end|><|im_start|>user <image><image><image><image><image><image><image><image>'
``` | 37_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#usage-with-multimodal-llms | .md | ```
Arr, 'twas easy after all! | 37_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#is-there-an-automated-pipeline-for-chat | .md | Yes, there is! Our text generation pipelines support chat inputs, which makes it easy to use chat models. In the past,
we used a dedicated "ConversationalPipeline" class, but this has now been deprecated and its functionality
has been merged into the [`TextGenerationPipeline`]. Let's try the `Zephyr` example again, but this time using
a pipeline:
```python
from transformers import pipeline | 37_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#is-there-an-automated-pipeline-for-chat | .md | pipe = pipeline("text-generation", "HuggingFaceH4/zephyr-7b-beta")
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
print(pipe(messages, max_new_tokens=128)[0]['generated_text'][-1]) # Print the assistant's response
```
```text | 37_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#is-there-an-automated-pipeline-for-chat | .md | ]
print(pipe(messages, max_new_tokens=128)[0]['generated_text'][-1]) # Print the assistant's response
```
```text
{'role': 'assistant', 'content': "Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all."}
``` | 37_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#is-there-an-automated-pipeline-for-chat | .md | ```
The pipeline will take care of all the details of tokenization and calling `apply_chat_template` for you -
once the model has a chat template, all you need to do is initialize the pipeline and pass it the list of messages! | 37_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-are-generation-prompts | .md | You may have noticed that the `apply_chat_template` method has an `add_generation_prompt` argument. This argument tells
the template to add tokens that indicate the start of a bot response. For example, consider the following chat:
```python
messages = [
{"role": "user", "content": "Hi there!"},
{"role": "assistant", "content": "Nice to meet you!"},
{"role": "user", "content": "Can I ask a question?"}
]
``` | 37_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-are-generation-prompts | .md | {"role": "assistant", "content": "Nice to meet you!"},
{"role": "user", "content": "Can I ask a question?"}
]
```
Here's what this will look like without a generation prompt, for a model that uses standard "ChatML" formatting:
```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
"""
``` | 37_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-are-generation-prompts | .md | <|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
"""
```
And here's what it looks like **with** a generation prompt:
```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
``` | 37_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-are-generation-prompts | .md | Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
Note that this time, we've added the tokens that indicate the start of a bot response. This ensures that when the model
generates text it will write a bot response instead of doing something unexpected, like continuing the user's
message. Remember, chat models are still just language models - they're trained to continue text, and chat is just a | 37_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-are-generation-prompts | .md | message. Remember, chat models are still just language models - they're trained to continue text, and chat is just a
special kind of text to them! You need to guide them with appropriate control tokens, so they know what they're
supposed to be doing.
Not all models require generation prompts. Some models, like LLaMA, don't have any
special tokens before bot responses. In these cases, the `add_generation_prompt` argument will have no effect. The exact | 37_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-are-generation-prompts | .md | special tokens before bot responses. In these cases, the `add_generation_prompt` argument will have no effect. The exact
effect that `add_generation_prompt` has will depend on the template being used. | 37_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-does-continuefinalmessage-do | .md | When passing a list of messages to `apply_chat_template` or `TextGenerationPipeline`, you can choose
to format the chat so the model will continue the final message in the chat instead of starting a new one. This is done
by removing any end-of-sequence tokens that indicate the end of the final message, so that the model will simply
extend the final message when it begins to generate text. This is useful for "prefilling" the model's response.
Here's an example:
```python
chat = [ | 37_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-does-continuefinalmessage-do | .md | Here's an example:
```python
chat = [
{"role": "user", "content": "Can you format the answer in JSON?"},
{"role": "assistant", "content": '{"name": "'},
] | 37_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-does-continuefinalmessage-do | .md | formatted_chat = tokenizer.apply_chat_template(chat, tokenize=True, return_dict=True, continue_final_message=True)
model.generate(**formatted_chat)
```
The model will generate text that continues the JSON string, rather than starting a new message. This approach
can be very useful for improving the accuracy of the model's instruction-following when you know how you want
it to start its replies. | 37_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-does-continuefinalmessage-do | .md | it to start its replies.
Because `add_generation_prompt` adds the tokens that start a new message, and `continue_final_message` removes any
end-of-message tokens from the final message, it does not make sense to use them together. As a result, you'll
get an error if you try!
<Tip>
The default behaviour of `TextGenerationPipeline` is to set `add_generation_prompt=True` so that it starts a new | 37_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#what-does-continuefinalmessage-do | .md | <Tip>
The default behaviour of `TextGenerationPipeline` is to set `add_generation_prompt=True` so that it starts a new
message. However, if the final message in the input chat has the "assistant" role, it will assume that this message is
a prefill and switch to `continue_final_message=True` instead, because most models do not support multiple
consecutive assistant messages. You can override this behaviour by explicitly passing the `continue_final_message`
argument when calling the pipeline.
</Tip> | 37_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#can-i-use-chat-templates-in-training | .md | Yes! This is a good way to ensure that the chat template matches the tokens the model sees during training.
We recommend that you apply the chat template as a preprocessing step for your dataset. After this, you
can simply continue like any other language model training task. When training, you should usually set
`add_generation_prompt=False`, because the tokens added to prompt an assistant response will not be helpful during
training. Let's see an example:
```python
from transformers import AutoTokenizer | 37_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#can-i-use-chat-templates-in-training | .md | training. Let's see an example:
```python
from transformers import AutoTokenizer
from datasets import Dataset | 37_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#can-i-use-chat-templates-in-training | .md | tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
chat1 = [
{"role": "user", "content": "Which is bigger, the moon or the sun?"},
{"role": "assistant", "content": "The sun."}
]
chat2 = [
{"role": "user", "content": "Which is bigger, a virus or a bacterium?"},
{"role": "assistant", "content": "A bacterium."}
] | 37_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#can-i-use-chat-templates-in-training | .md | dataset = Dataset.from_dict({"chat": [chat1, chat2]})
dataset = dataset.map(lambda x: {"formatted_chat": tokenizer.apply_chat_template(x["chat"], tokenize=False, add_generation_prompt=False)})
print(dataset['formatted_chat'][0])
```
And we get:
```text
<|user|>
Which is bigger, the moon or the sun?</s>
<|assistant|>
The sun.</s>
```
From here, just continue training like you would with a standard language modelling task, using the `formatted_chat` column.
<Tip> | 37_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#can-i-use-chat-templates-in-training | .md | <Tip>
By default, some tokenizers add special tokens like `<bos>` and `<eos>` to text they tokenize. Chat templates should
already include all the special tokens they need, and so additional special tokens will often be incorrect or
duplicated, which will hurt model performance.
Therefore, if you format text with `apply_chat_template(tokenize=False)`, you should set the argument | 37_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#can-i-use-chat-templates-in-training | .md | Therefore, if you format text with `apply_chat_template(tokenize=False)`, you should set the argument
`add_special_tokens=False` when you tokenize that text later. If you use `apply_chat_template(tokenize=True)`, you don't need to worry about this!
</Tip> | 37_8_5 |
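For example, continuing the training example above, tokenizing the pre-formatted text later would look like this (a minimal sketch):
```python
# The chat template already added all the special tokens the model expects,
# so we must not let the tokenizer add BOS/EOS again.
tokenized = tokenizer(dataset["formatted_chat"][0], add_special_tokens=False)
print(tokenized["input_ids"][:10])
```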
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-extra-inputs-to-chat-templates | .md | The only argument that `apply_chat_template` requires is `messages`. However, you can pass any keyword
argument to `apply_chat_template` and it will be accessible inside the template. This gives you a lot of freedom to use
chat templates for many things. There are no restrictions on the names or the format of these arguments - you can pass
strings, lists, dicts or whatever else you want.
That said, there are some common use-cases for these extra arguments, | 37_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-extra-inputs-to-chat-templates | .md | strings, lists, dicts or whatever else you want.
That said, there are some common use-cases for these extra arguments,
such as passing tools for function calling, or documents for retrieval-augmented generation. In these common cases,
we have some opinionated recommendations about what the names and formats of these arguments should be, which are
described in the sections below. We encourage model authors to make their chat templates compatible with this format, | 37_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-extra-inputs-to-chat-templates | .md | described in the sections below. We encourage model authors to make their chat templates compatible with this format,
to make it easy to transfer tool-calling code between models. | 37_9_2 |
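As a minimal sketch, any extra keyword argument is simply exposed as a variable inside the template; it only has an effect if the model's chat template actually references that variable (the `documents` argument below follows the RAG convention mentioned above and is ignored by templates that don't support it):
```python
# `documents` is only rendered by chat templates that support retrieval-augmented generation.
formatted = tokenizer.apply_chat_template(
    messages,
    documents=[{"title": "Tensor parallelism", "text": "Each GPU processes a slice of a tensor..."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(formatted)
```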
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-tool-use--function-calling | .md | "Tool use" LLMs can choose to call functions as external tools before generating an answer. When passing tools
to a tool-use model, you can simply pass a list of functions to the `tools` argument:
```python
from datetime import datetime

def current_time():
    """Get the current local time as a string."""
    return str(datetime.now())

def multiply(a: float, b: float):
    """
    A function that multiplies two numbers

    Args:
        a: The first number to multiply
        b: The second number to multiply
    """
    return a * b | 37_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-tool-use--function-calling | .md | Args:
a: The first number to multiply
b: The second number to multiply
"""
return a * b
tools = [current_time, multiply] | 37_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-tool-use--function-calling | .md | model_input = tokenizer.apply_chat_template(
    messages,
    tools=tools
)
```
In order for this to work correctly, you should write your functions in the format above, so that they can be parsed
correctly as tools. Specifically, you should follow these rules:
- The function should have a descriptive name
- Every argument must have a type hint
- The function must have a docstring in the standard Google style (in other words, an initial function description | 37_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-tool-use--function-calling | .md | - The function must have a docstring in the standard Google style (in other words, an initial function description
followed by an `Args:` block that describes the arguments, unless the function does not have any arguments).
- Do not include types in the `Args:` block. In other words, write `a: The first number to multiply`, not
`a (int): The first number to multiply`. Type hints should go in the function header instead. | 37_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#advanced-tool-use--function-calling | .md | `a (int): The first number to multiply`. Type hints should go in the function header instead.
- The function can have a return type and a `Returns:` block in the docstring. However, these are optional
because most tool-use models ignore them. | 37_10_4 |
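To check that a function follows these rules, you can inspect the JSON schema that will be generated from it. Recent versions of 🤗 Transformers expose a `get_json_schema` helper for this; treat the exact import path as version-dependent:
```python
from transformers.utils import get_json_schema

def multiply(a: float, b: float):
    """
    A function that multiplies two numbers

    Args:
        a: The first number to multiply
        b: The second number to multiply
    """
    return a * b

# The type hints and the Google-style docstring are parsed into a JSON schema
# that the chat template can render for the model.
print(get_json_schema(multiply))
```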
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#passing-tool-results-to-the-model | .md | The sample code above is enough to list the available tools for your model, but what happens if it wants to actually use
one? If that happens, you should:
1. Parse the model's output to get the tool name(s) and arguments.
2. Add the model's tool call(s) to the conversation.
3. Call the corresponding function(s) with those arguments.
4. Add the result(s) to the conversation | 37_11_0 |
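A minimal sketch of steps 2-4, reusing the `messages` list and the `multiply` function from the snippets above and assuming we already parsed a call to `multiply(a=3.0, b=4.0)` out of the model's output (the exact message layout accepted for tool calls and tool results can vary slightly between models, so check your model's chat template):
```python
# 2. Record the model's tool call in the conversation.
tool_call = {"name": "multiply", "arguments": {"a": 3.0, "b": 4.0}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})

# 3. Actually run the function with the requested arguments.
result = multiply(**tool_call["arguments"])

# 4. Append the result so the model can see it on the next turn.
messages.append({"role": "tool", "name": "multiply", "content": str(result)})

# The updated conversation can now be templated again (with tools=tools) for the next generation.
```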
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#a-complete-tool-use-example | .md | Let's walk through a tool use example, step by step. For this example, we will use an 8B `Hermes-2-Pro` model,
as it is one of the highest-performing tool-use models in its size category at the time of writing. If you have the
memory, you can consider using a larger model instead like [Command-R](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
or [Mixtral-8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1), both of which also support tool use
and offer even stronger performance. | 37_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/chat_templating.md | https://huggingface.co/docs/transformers/en/chat_templating/#a-complete-tool-use-example | .md | and offer even stronger performance.
First, let's load our model and tokenizer:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer | 37_12_1 |