merged if the probability of `"ug"` divided by `"u"`, `"g"` would have been greater than for any other symbol
pair. Intuitively, WordPiece is slightly different from BPE in that it evaluates what it _loses_ by merging two symbols
to ensure it's _worth it_.
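As a rough illustration of that scoring rule (this is not the actual WordPiece trainer; the frequency counts below are made up and assumed to be precomputed from a pre-tokenized corpus):

```py
from collections import Counter

# Made-up symbol-pair and symbol frequencies from a hypothetical corpus.
pair_freqs = Counter({("u", "g"): 20, ("h", "u"): 5, ("g", "s"): 5})
symbol_freqs = Counter({"u": 36, "g": 30, "h": 15, "s": 10})

def merge_score(pair):
    """freq(ab) / (freq(a) * freq(b)): what is lost by keeping a and b separate."""
    first, second = pair
    return pair_freqs[pair] / (symbol_freqs[first] * symbol_freqs[second])

best_pair = max(pair_freqs, key=merge_score)  # ("u", "g") with these made-up counts
```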

<a id='unigram'></a>

## Unigram
Unigram is a subword tokenization algorithm introduced in [Subword Regularization: Improving Neural Network Translation
Models with Multiple Subword Candidates (Kudo, 2018)](https://arxiv.org/pdf/1804.10959.pdf). In contrast to BPE or
WordPiece, Unigram initializes its base vocabulary to a large number of symbols and progressively trims down each
symbol to obtain a smaller vocabulary. The base vocabulary could for instance correspond to all pre-tokenized words and
the most common substrings. Unigram is not used directly for any of the models in Transformers, but it's used in
conjunction with [SentencePiece](#sentencepiece).

At each training step, the Unigram algorithm defines a loss (often defined as the log-likelihood) over the training
data given the current vocabulary and a unigram language model. Then, for each symbol in the vocabulary, the algorithm
computes how much the overall loss would increase if the symbol were removed from the vocabulary. Unigram then
removes p (with p usually being 10% or 20%) percent of the symbols whose loss increase is the lowest, *i.e.* those
symbols that least affect the overall loss over the training data. This process is repeated until the vocabulary has
reached the desired size. The Unigram algorithm always keeps the base characters so that any word can be tokenized.
Because Unigram is not based on merge rules (in contrast to BPE and WordPiece), the algorithm has several ways of
tokenizing new text after training. As an example, if a trained Unigram tokenizer exhibits the vocabulary:
```
["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"],
```
`"hugs"` could be tokenized both as `["hug", "s"]`, `["h", "ug", "s"]` or `["h", "u", "g", "s"]`. So which one | 40_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tokenizer_summary.md | https://huggingface.co/docs/transformers/en/tokenizer_summary/#unigram | .md | ```
`"hugs"` could be tokenized both as `["hug", "s"]`, `["h", "ug", "s"]` or `["h", "u", "g", "s"]`. So which one
to choose? Unigram saves the probability of each token in the training corpus on top of saving the vocabulary so that
the probability of each possible tokenization can be computed after training. The algorithm simply picks the most
likely tokenization in practice, but also offers the possibility to sample a possible tokenization according to their
probabilities.
Those probabilities are defined by the loss the tokenizer is trained on. Assuming that the training data consists of
the words \\(x_{1}, \dots, x_{N}\\) and that the set of all possible tokenizations for a word \\(x_{i}\\) is
defined as \\(S(x_{i})\\), then the overall loss is defined as
$$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$$
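To make this concrete, here is a small self-contained sketch using the toy vocabulary above and made-up token probabilities (a real tokenizer learns these during training). It enumerates the tokenizations of `"hugs"`, picks the most likely one, and computes the word's contribution to the loss:

```py
import math

# Assumed (made-up) unigram probabilities for the toy vocabulary above.
probs = {"b": 0.04, "g": 0.10, "h": 0.06, "n": 0.05, "p": 0.04,
         "s": 0.08, "u": 0.07, "ug": 0.18, "un": 0.12, "hug": 0.16}

def tokenizations(word):
    """All ways to split `word` into tokens from the vocabulary."""
    if not word:
        return [[]]
    splits = []
    for i in range(1, len(word) + 1):
        if word[:i] in probs:
            splits += [[word[:i]] + rest for rest in tokenizations(word[i:])]
    return splits

def probability(tokens):
    return math.prod(probs[t] for t in tokens)

candidates = tokenizations("hugs")       # [["h", "u", "g", "s"], ["h", "ug", "s"], ["hug", "s"]]
best = max(candidates, key=probability)  # the tokenization Unigram would pick
word_loss = -math.log(sum(probability(t) for t in candidates))  # this word's term in the loss above
```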

<a id='sentencepiece'></a>

## SentencePiece
All tokenization algorithms described so far have the same problem: it is assumed that the input text uses spaces to
separate words. However, not all languages use spaces to separate words. One possible solution is to use a language-specific
pre-tokenizer, *e.g.* [XLM](model_doc/xlm) uses a specific Chinese, Japanese, and Thai pre-tokenizer.

To solve this problem more generally, [SentencePiece: A simple and language independent subword tokenizer and
detokenizer for Neural Text Processing (Kudo et al., 2018)](https://arxiv.org/pdf/1808.06226.pdf) treats the input
as a raw input stream, thus including the space in the set of characters to use. It then uses the BPE or unigram
algorithm to construct the appropriate vocabulary.

The [`XLNetTokenizer`] uses SentencePiece, for example, which is also why in the example earlier the
`"▁"` character was included in the vocabulary. Decoding with SentencePiece is very easy since all tokens can just be
concatenated and `"▁"` is replaced by a space.
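As a tiny illustration (the token list is an assumed SentencePiece-style output, not taken from a real model):

```py
# "▁" marks positions where a space preceded the token in the original text.
tokens = ["▁Hello", "▁wor", "ld", "▁", "🤗"]
text = "".join(tokens).replace("▁", " ").lstrip()
print(text)  # Hello world 🤗
```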

All Transformers models in the library that use SentencePiece use it in combination with unigram. Examples of models
using SentencePiece are [ALBERT](model_doc/albert), [XLNet](model_doc/xlnet), [Marian](model_doc/marian), and [T5](model_doc/t5).
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# DeepSpeed
[DeepSpeed](https://www.deepspeed.ai/) is a PyTorch optimization library that makes distributed training memory-efficient and fast. At its core is the [Zero Redundancy Optimizer (ZeRO)](https://hf.co/papers/1910.02054), which enables training large models at scale. ZeRO works in several stages:

* ZeRO-1, optimizer state partitioning across GPUs
* ZeRO-2, gradient partitioning across GPUs
* ZeRO-3, parameter partitioning across GPUs
In GPU-limited environments, ZeRO also enables offloading optimizer memory and computation from the GPU to the CPU to fit and train really large models on a single GPU. DeepSpeed is integrated with the Transformers [`Trainer`] class for all ZeRO stages and offloading. All you need to do is provide a config file, or you can use a provided template. For inference, Transformers supports ZeRO-3 and offloading since it allows loading huge models.

This guide will walk you through how to deploy DeepSpeed training, the features you can enable, how to set up the config files for different ZeRO stages, offloading, and inference, and how to use DeepSpeed without the [`Trainer`].
## Installation

DeepSpeed is available to install from PyPI or Transformers (for more detailed installation options, take a look at the DeepSpeed [installation details](https://www.deepspeed.ai/tutorials/advanced-install/) or the GitHub [README](https://github.com/microsoft/deepspeed#installation)).

<Tip>
If you're having difficulties installing DeepSpeed, check the [DeepSpeed CUDA installation](../debugging#deepspeed-cuda-installation) guide. While DeepSpeed has a pip installable PyPI package, it is highly recommended to [install it from source](https://www.deepspeed.ai/tutorials/advanced-install/#install-deepspeed-from-source) to best match your hardware and to support certain features, like 1-bit Adam, which aren’t available in the PyPI distribution.
</Tip>
<hfoptions id="install">
<hfoption id="PyPI">
```bash
pip install deepspeed
```
</hfoption>
<hfoption id="Transformers">
```bash
pip install transformers[deepspeed]
```
</hfoption>
</hfoptions>
## Memory requirements

Before you begin, it is a good idea to check whether you have enough GPU and CPU memory to fit your model. DeepSpeed provides a tool for estimating the required CPU/GPU memory. For example, to estimate the memory requirements for the [bigscience/T0_3B](bigscience/T0_3B) model on a single GPU:
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
```
This means you either need a single 80GB GPU without CPU offload or an 8GB GPU and ~60GB of CPU memory to offload to (these are just the memory requirements for the parameters, optimizer states, and gradients, and you'll need a bit more for the CUDA kernels and activations). You should also consider the tradeoff between cost and speed, because it'll be cheaper to rent or buy a smaller GPU but it'll take longer to train your model.

If you have enough GPU memory, make sure you disable CPU/NVMe offload to make everything faster.
## Select a ZeRO stage

After you've installed DeepSpeed and have a better idea of your memory requirements, the next step is selecting a ZeRO stage to use. In order of fastest and most memory-efficient:

| Fastest          | Memory efficient |
|------------------|------------------|
| ZeRO-1           | ZeRO-3 + offload |
| ZeRO-2           | ZeRO-3           |
| ZeRO-2 + offload | ZeRO-2 + offload |
| ZeRO-3           | ZeRO-2           |
| ZeRO-3 + offload | ZeRO-1           |

To find what works best for you, start with the fastest approach, and if you run out of memory, try the next stage, which is slower but more memory-efficient. Feel free to work in whichever direction you prefer (starting with the most memory-efficient or the fastest) to discover the appropriate balance between speed and memory usage.
A general process you can use is (start with a batch size of 1; a minimal starting configuration for the first few steps is sketched after this list):
1. enable gradient checkpointing
2. try ZeRO-2
3. try ZeRO-2 and offload the optimizer
4. try ZeRO-3
5. try ZeRO-3 and offload parameters to the CPU
6. try ZeRO-3 and offload parameters and the optimizer to the CPU
7. try lowering various default values like a narrower search beam if you're using the [`~GenerationMixin.generate`] method
8. try mixed half-precision (fp16 on older GPU architectures and bf16 on Ampere) over full-precision weights
9. add more hardware if possible or enable Infinity to offload parameters and the optimizer to an NVMe
10. once you're not running out of memory, measure your effective throughput and then try to increase the batch size as much as you can to maximize GPU efficiency
11. lastly, try to optimize your training setup by disabling some offload features or using a faster ZeRO stage and increasing/decreasing the batch size to find the best tradeoff between speed and memory usage
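For example, a possible starting point for the first few steps (batch size of 1, gradient checkpointing, ZeRO-2 with optimizer offload) could look like the sketch below. The output path and config values are placeholders to adapt to your setup, not recommended settings.

```py
from transformers import TrainingArguments

# ZeRO-2 with optimizer offload; "auto" values are filled in by the Trainer.
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
}

args = TrainingArguments(
    output_dir="output_dir",        # placeholder
    per_device_train_batch_size=1,  # start with a batch size of 1
    gradient_checkpointing=True,    # step 1: enable gradient checkpointing
    deepspeed=ds_config,            # steps 2-3: ZeRO-2 and offload the optimizer
)
```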

## DeepSpeed configuration file

DeepSpeed works with the [`Trainer`] class by way of a config file containing all the parameters for configuring how you want to set up your training run. When you execute your training script, DeepSpeed logs the configuration it receives from [`Trainer`] to the console so you can see exactly what configuration was used.

<Tip>
Find a complete list of DeepSpeed configuration options on the [DeepSpeed Configuration JSON](https://www.deepspeed.ai/docs/config-json/) reference. You can also find practical examples of various DeepSpeed configurations in the [DeepSpeedExamples](https://github.com/microsoft/DeepSpeedExamples) repository or the main [DeepSpeed](https://github.com/microsoft/DeepSpeed) repository. To quickly find specific examples, you can:

```bash
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
# find examples with the Lamb optimizer
grep -i Lamb $(find . -name '*json')
```
</Tip>
The DeepSpeed configuration file is passed as a path to a JSON file if you're training from the command line interface or as a nested `dict` object if you're using the [`Trainer`] in a notebook setting.
<hfoptions id="pass-config">
<hfoption id="path to file">
```py
TrainingArguments(..., deepspeed="path/to/deepspeed_config.json")
```
</hfoption>
<hfoption id="nested dict">
```py
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
args = TrainingArguments(..., deepspeed=ds_config_dict)
trainer = Trainer(model, args, ...)
```
</hfoption>
</hfoptions>
### DeepSpeed and Trainer parameters

There are three types of configuration parameters:

1. Some configuration parameters are shared by [`Trainer`] and DeepSpeed, and it can be difficult to identify errors when there are conflicting definitions. To make it easier, these shared configuration parameters are configured from the [`Trainer`] command line arguments.
2. Some configuration parameters are automatically derived from the model configuration, so you don't need to manually adjust these values. The [`Trainer`] uses the configuration value `auto` to determine the most correct or efficient value. You could set your own configuration parameters explicitly, but you must take care to ensure the [`Trainer`] arguments and DeepSpeed configuration parameters agree. Mismatches may cause the training to fail in very difficult-to-detect ways!
3. Some configuration parameters are specific to DeepSpeed and need to be manually set based on your training needs.

You could also modify the DeepSpeed configuration and edit [`TrainingArguments`] from it, as sketched below:

1. Create or load a DeepSpeed configuration to use as the main configuration
2. Create a [`TrainingArguments`] object based on these DeepSpeed configuration values

Some values, such as `scheduler.params.total_num_steps`, are calculated by the [`Trainer`] during training.
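A rough sketch of that flow, assuming the configuration lives in a local `ds_config.json` (the file name and the fallback batch size are illustrative):

```py
import json

from transformers import TrainingArguments

# 1. Load the DeepSpeed configuration to use as the main configuration.
with open("ds_config.json") as f:  # assumed path
    ds_config = json.load(f)

# 2. Build TrainingArguments from it, keeping the shared values consistent.
micro_bs = ds_config.get("train_micro_batch_size_per_gpu", "auto")
args = TrainingArguments(
    output_dir="output_dir",  # placeholder
    deepspeed=ds_config,
    per_device_train_batch_size=micro_bs if isinstance(micro_bs, int) else 8,  # fall back when "auto"
)
```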

### ZeRO configuration

There are three configurations, each corresponding to a different ZeRO stage. Stage 1 is not as interesting for scalability, so this guide focuses on stages 2 and 3. The `zero_optimization` configuration contains all the options for what to enable and how to configure them. For a more detailed explanation of each parameter, take a look at the [DeepSpeed Configuration JSON](https://www.deepspeed.ai/docs/config-json/) reference.

<Tip warning={true}>
DeepSpeed doesn't validate parameter names, and any typos fall back to the parameter's default setting. You can watch the DeepSpeed engine startup log messages to see what values it is going to use.

</Tip>

The following configurations must be set up in the DeepSpeed config file because the [`Trainer`] doesn't provide equivalent command line arguments.

<hfoptions id="zero-config">
<hfoption id="ZeRO-1">
ZeRO-1 shards the optimizer states across GPUs, and you can expect a tiny speedup. The ZeRO-1 config can be set up like this:

```yml
{
    "zero_optimization": {
        "stage": 1
    }
}
```
</hfoption>
<hfoption id="ZeRO-2">
ZeRO-2 shards the optimizer states and gradients across GPUs. This stage is primarily used for training since its features are not relevant to inference. Some important parameters to configure for better performance include:

* `offload_optimizer` should be enabled to reduce GPU memory usage.
* `overlap_comm`, when set to `true`, trades increased GPU memory usage for lower allreduce latency. This feature uses 4.5x the `allgather_bucket_size` and `reduce_bucket_size` values. In this example, they're set to `5e8`, which means it requires 9GB of GPU memory. If your GPU memory is 8GB or less, you should reduce `overlap_comm` to lower the memory requirements and prevent an out-of-memory (OOM) error.
* `allgather_bucket_size` and `reduce_bucket_size` trade off available GPU memory for communication speed. The smaller their values, the slower communication is and the more GPU memory is available. You can balance, for example, whether a bigger batch size is more important than a slightly slower training time.
* `round_robin_gradients` is available in DeepSpeed 0.4.4 for CPU offloading. It parallelizes gradient copying to CPU memory among ranks by fine-grained gradient partitioning. The performance benefit grows with gradient accumulation steps (more copying between optimizer steps) or GPU count (increased parallelism).
```yml
{
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "allgather_partitions": true,
        "allgather_bucket_size": 5e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 5e8,
        "contiguous_gradients": true,
        "round_robin_gradients": true
    }
}
```

</hfoption>
<hfoption id="ZeRO-3">
ZeRO-3 shards the optimizer states, gradients, and parameters across GPUs. Unlike ZeRO-2, ZeRO-3 can also be used for inference, in addition to training, because it allows large models to be loaded on multiple GPUs. Some important parameters to configure include:

* `device: "cpu"` can help if you're running out of GPU memory and have free CPU memory available. This allows offloading model parameters to the CPU.
* `pin_memory: true` can improve throughput, but less memory becomes available for other processes because the pinned memory is reserved for the specific process that requested it, and it's typically accessed much faster than normal CPU memory.
* `stage3_max_live_parameters` is the upper limit on how many full parameters you want to keep on the GPU at any given time. Reduce this value if you encounter an OOM error.
* `stage3_max_reuse_distance` is a value for determining when a parameter is used again in the future, and it helps decide whether to throw the parameter away or to keep it. If the parameter is going to be reused (if the value is less than `stage3_max_reuse_distance`), then it is kept to reduce communication overhead. This is super helpful when activation checkpointing is enabled and you want to keep the parameter in the forward recompute until the backward pass. But reduce this value if you encounter an OOM error.
* `stage3_gather_16bit_weights_on_model_save` consolidates fp16 weights when a model is saved. For large models and multiple GPUs, this is expensive in terms of memory and speed. You should enable it if you're planning on resuming training.
* `sub_group_size` controls which parameters are updated during the optimizer step. Parameters are grouped into buckets of `sub_group_size`, and each bucket is updated one at a time. When used with NVMe offload, `sub_group_size` determines when model states are moved in and out of CPU memory during the optimization step. This prevents running out of CPU memory for extremely large models. `sub_group_size` can be left at its default value if you aren't using NVMe offload, but you may want to change it if you:

    1. Run into an OOM error during the optimizer step. In this case, reduce `sub_group_size` to reduce memory usage of the temporary buffers.
    2. The optimizer step is taking a really long time. In this case, increase `sub_group_size` to improve bandwidth utilization as a result of increased data buffers.

* `reduce_bucket_size`, `stage3_prefetch_bucket_size`, and `stage3_param_persistence_threshold` are dependent on a model's hidden size. It is recommended to set these values to `auto` and allow the [`Trainer`] to automatically assign them.
```yml
{
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    }
}
```
You can use the [`deepspeed.zero.Init`](https://deepspeed.readthedocs.io/en/latest/zero3.html#deepspeed.zero.Init) context manager to initialize a model faster:
```py
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed

with deepspeed.zero.Init():
    config = T5Config.from_pretrained("google-t5/t5-small")
    model = T5ForConditionalGeneration(config)
```
For pretrained models, the DeepSpeed config file needs to have `is_deepspeed_zero3_enabled: true` set up in [`TrainingArguments`], and it needs a ZeRO configuration enabled. The [`TrainingArguments`] object must be created **before** calling the model's [`~PreTrainedModel.from_pretrained`] method.
```py
from transformers import AutoModel, Trainer, TrainingArguments

training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("google-t5/t5-small")
trainer = Trainer(model=model, args=training_args, ...)
```
You'll need ZeRO-3 if the fp16 weights don't fit on a single GPU. If you're able to load fp16 weights, make sure you specify `torch_dtype=torch.float16` in [`~PreTrainedModel.from_pretrained`].
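A minimal sketch (the T5 checkpoint is used purely to illustrate the `torch_dtype` argument):

```py
import torch
from transformers import AutoModel

# Load the weights directly in fp16 instead of the default fp32.
model = AutoModel.from_pretrained("google-t5/t5-small", torch_dtype=torch.float16)
```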

Another consideration for ZeRO-3 is that with multiple GPUs, no single GPU has all the parameters unless they're the parameters of the currently executing layer. To access all parameters from all the layers at once, such as when loading pretrained model weights in [`~PreTrainedModel.from_pretrained`], one layer is loaded at a time and immediately partitioned across all GPUs. This is because, for very large models, it isn't possible to load the weights on one GPU and then distribute them across the other GPUs due to memory limitations.
If you encounter a model parameter weight that looks like the following, where the value is `tensor([1.])` or the parameter size is 1 instead of a larger multi-dimensional shape, it means the parameter is partitioned and this is a ZeRO-3 placeholder.
```py
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
```
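If you need the full values of such a partitioned parameter, one option is DeepSpeed's `GatheredParameters` context manager, sketched below under the assumption that `model` is already initialized under ZeRO-3 and that `model.lm_head.weight` is just an illustrative parameter; the guides linked in the tip below cover the details.

```py
import deepspeed

# Temporarily gather the partitioned weight so its full shape and values are visible.
with deepspeed.zero.GatheredParameters(model.lm_head.weight, modifier_rank=None):
    print(model.lm_head.weight.shape)  # the full shape instead of the size-1 placeholder
```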

<Tip>

For more information about initializing large models with ZeRO-3 and accessing the parameters, take a look at the [Constructing Massive Models](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models) and [Gathering Parameters](https://deepspeed.readthedocs.io/en/latest/zero3.html#gathering-parameters) guides.

</Tip>

</hfoption>
</hfoptions>
### NVMe configuration

[ZeRO-Infinity](https://hf.co/papers/2104.07857) allows offloading model states to the CPU and/or NVMe to save even more memory. Smart partitioning and tiling algorithms allow each GPU to send and receive very small amounts of data during offloading such that a modern NVMe can fit an even larger total memory pool than is available to your training process. ZeRO-Infinity requires ZeRO-3.

Depending on the CPU and/or NVMe memory available, you can offload both the [optimizer states](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) and [parameters](https://www.deepspeed.ai/docs/config-json/#parameter-offloading), just one of them, or neither. You should also make sure the `nvme_path` points to an NVMe device, because while offloading still works with a normal hard drive or solid state drive, it'll be significantly slower. With a modern NVMe, you can expect peak transfer speeds of ~3.5GB/s for reads and ~3GB/s for writes. Lastly, [run a benchmark](https://github.com/microsoft/DeepSpeed/issues/998) on your training setup to determine the optimal `aio` configuration.

The example ZeRO-3/Infinity configuration file below sets most of the parameter values to `auto`, but you could also set these values manually.
```yml
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
            "pin_memory": true,
            "buffer_count": 4,
            "fast_init": false
        },
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
            "pin_memory": true,
            "buffer_count": 5,
            "buffer_size": 1e8,
            "max_in_cpu": 1e9
        },
        "aio": {
            "block_size": 262144,
            "queue_depth": 32,
            "thread_count": 1,
            "single_submit": false,
            "overlap_events": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
```
## DeepSpeed features

There are a number of important parameters to specify in the DeepSpeed configuration file, which are briefly described in this section.
### Activation/gradient checkpointing

Activation and gradient checkpointing trades speed for more GPU memory, which allows you to overcome scenarios where your GPU is out of memory or to increase your batch size for better performance. To enable this feature:

1. For a Hugging Face model, call `model.gradient_checkpointing_enable()` or pass `--gradient_checkpointing` to the [`Trainer`] (see the sketch after this list).
2. For a non-Hugging Face model, use the DeepSpeed [Activation Checkpointing API](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html). You could also replace the Transformers modeling code and replace `torch.utils.checkpoint` with the DeepSpeed API. This approach is more flexible because you can offload the forward activations to the CPU memory instead of recalculating them.
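For the Hugging Face case (option 1 above), a minimal sketch looks like the following; the output directory and config path are placeholders:

```py
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output_dir",      # placeholder
    gradient_checkpointing=True,  # same effect as passing --gradient_checkpointing
    deepspeed="ds_config.json",   # placeholder DeepSpeed config path
)
```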

### Optimizer and scheduler

The DeepSpeed and Transformers optimizers and schedulers can be mixed and matched as long as you don't enable `offload_optimizer`. When `offload_optimizer` is enabled, you can use a non-DeepSpeed optimizer (except for LAMB) as long as it has both a CPU and GPU implementation.

<Tip warning={true}>

The optimizer and scheduler parameters for the config file can be set from the command line to avoid hard-to-find errors. For example, if the learning rate is set to a different value in another place, you can override it from the command line. Aside from the optimizer and scheduler parameters, you'll need to ensure your [`Trainer`] command line arguments match the DeepSpeed configuration.
</Tip>
<hfoptions id="opt-sched">
<hfoption id="optimizer">

DeepSpeed offers several [optimizers](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters) (Adam, AdamW, OneBitAdam, and LAMB), but you can also import other optimizers from PyTorch. If you don't configure the optimizer in the config, the [`Trainer`] automatically selects AdamW and either uses the supplied values or the default values for the following parameters from the command line: `lr`, `adam_beta1`, `adam_beta2`, `adam_epsilon`, `weight_decay`.
You can set the parameters to `"auto"` or manually input your own desired values.

```yaml
{
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    }
}
```

You can also use an unsupported optimizer by adding the following to the top-level configuration.

```yaml
{
    "zero_allow_untested_optimizer": true
}
```
From DeepSpeed==0.8.3 on, if you want to use offload, you'll also need to add the following to the top-level configuration because offload works best with DeepSpeed's CPU Adam optimizer.

```yaml
{
    "zero_force_ds_cpu_optimizer": false
}
```

</hfoption>
<hfoption id="scheduler">

DeepSpeed supports the LRRangeTest, OneCycle, WarmupLR, and WarmupDecayLR learning rate [schedulers](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters).
Transformers and DeepSpeed provide two of the same schedulers:

* WarmupLR is the same as `--lr_scheduler_type constant_with_warmup` in Transformers
* WarmupDecayLR is the same as `--lr_scheduler_type linear` in Transformers (this is the default scheduler used in Transformers)
If you don't configure the scheduler in the config, the [`Trainer`] automatically selects WarmupDecayLR and either uses the supplied values or the default values for the following parameters from the command line: `warmup_min_lr`, `warmup_max_lr`, `warmup_num_steps`, `total_num_steps` (automatically calculated at runtime if `max_steps` is not provided).

You can set the parameters to `"auto"` or manually input your own desired values.

```yaml
{
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "total_num_steps": "auto",
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    }
}
```

</hfoption>
</hfoptions>
### Precision

DeepSpeed supports fp32, fp16, and bf16 mixed precision.
<hfoptions id="precision">
<hfoption id="fp32">
If your model doesn't work well with mixed precision, for example if it wasn't pretrained in mixed precision, you may encounter overflow or underflow issues which can cause NaN loss. For these cases, you should use full fp32 precision by explicitly disabling the default fp16 mode.
```yaml
{
    "fp16": {
        "enabled": false
    }
}
```
For Ampere GPUs and PyTorch > 1.7, it automatically switches to the more efficient [tf32](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) format for some operations but the results are still in fp32. You can control it from the [`Trainer`] by setting `--tf32` to enable it, and `--tf32 0` or `--no_tf32` to disable it.
</hfoption>
<hfoption id="fp16">
PyTorch AMP-like fp16 mixed precision reduces memory usage and accelerates training speed. [`Trainer`] automatically enables or disables fp16 based on the value of `args.fp16_backend`, and you can set the rest of the config yourself. fp16 is enabled from the command line when the following arguments are passed: `--fp16`, `--fp16_backend amp` or `--fp16_full_eval`.

```yaml
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    }
}
```
For additional DeepSpeed fp16 training options, take a look at the [FP16 Training Options](https://www.deepspeed.ai/docs/config-json/#fp16-training-options) reference.

To configure Apex-like fp16 mixed precision, set up the config as shown below with `"auto"` or your own values. [`Trainer`] automatically configures `amp` based on the values of `args.fp16_backend` and `args.fp16_opt_level`. It can also be enabled from the command line when the following arguments are passed: `--fp16`, `--fp16_backend apex` or `--fp16_opt_level O1`.

```yaml
{
    "amp": {
        "enabled": "auto",
        "opt_level": "auto"
    }
}
```

</hfoption>
<hfoption id="bf16">
To use bf16, you'll need at least DeepSpeed==0.6.0. bf16 has the same dynamic range as fp32 and doesn't require loss scaling. However, if you use [gradient accumulation](#gradient-accumulation) with bf16, gradients are accumulated in bf16, which may not be desired because this format's low precision can lead to lossy accumulation.

bf16 can be set up in the config file or enabled from the command line when the following arguments are passed: `--bf16` or `--bf16_full_eval`.

```yaml
{
    "bf16": {
        "enabled": "auto"
    }
}
```

</hfoption>
</hfoptions>
### Batch size

The batch size can be auto-configured or explicitly set. If you choose to use the `"auto"` option, [`Trainer`] sets `train_micro_batch_size_per_gpu` to the value of `args.per_device_train_batch_size` and `train_batch_size` to `args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`.

```yaml
{
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto"
}
```
### Gradient accumulation

Gradient accumulation can be auto-configured or explicitly set. If you choose to use the `"auto"` option, [`Trainer`] sets it to the value of `args.gradient_accumulation_steps`.

```yaml
{
    "gradient_accumulation_steps": "auto"
}
```
### Gradient clipping

Gradient clipping can be auto-configured or explicitly set. If you choose to use the `"auto"` option, [`Trainer`] sets it to the value of `args.max_grad_norm`.

```yaml
{
    "gradient_clipping": "auto"
}
```
### Communication data type

A separate data type is used for communication collectives like reduction, gathering, and scattering operations.

All gather and scatter operations are performed in the same data type the data is in. For example, if you're training with bf16, the data is also gathered in bf16 because gathering is a non-lossy operation.

Reduce operations are lossy, for example when gradients are averaged across multiple GPUs. When the communication is done in fp16 or bf16, it is more likely to be lossy because adding multiple numbers in low precision isn't exact. This is especially the case with bf16, which has lower precision than fp16. For this reason, fp16 is the default for reduction operations because the loss is minimal when averaging gradients.

You can choose the communication data type by setting the `communication_data_type` parameter in the config file. For example, choosing fp32 adds a small amount of overhead, but it ensures the reduction operation is accumulated in fp32 and, when ready, downcast to whichever half-precision dtype you're training in.

```yaml
{
    "communication_data_type": "fp32"
}
```
### Universal Checkpointing

[Universal Checkpointing](https://www.deepspeed.ai/tutorials/universal-checkpointing) is an efficient and flexible feature for saving and loading model checkpoints. It enables seamless model training continuation and fine-tuning across different model architectures, parallelism techniques, and training configurations.

Resume training with a universal checkpoint by setting [load_universal](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) to `true` in the config file.

```yaml
{
    "checkpoint": {
        "load_universal": true
    }
}
```
## Deployment

DeepSpeed can be deployed with different launchers such as [torchrun](https://pytorch.org/docs/stable/elastic/run.html), the `deepspeed` launcher, or [Accelerate](https://huggingface.co/docs/accelerate/basic_tutorials/launch#using-accelerate-launch). To deploy, add `--deepspeed ds_config.json` to the [`Trainer`] command line. It's recommended to use DeepSpeed's [`add_config_arguments`](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) utility to add any necessary command line arguments to your code.

This guide will show you how to deploy DeepSpeed with the `deepspeed` launcher for different training setups. You can check out this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) for more practical usage examples.
<hfoptions id="deploy">
<hfoption id="multi-GPU">
To deploy DeepSpeed on multiple GPUs, add the `--num_gpus` parameter. If you want to use all available GPUs, you don't need to add `--num_gpus`. The example below uses 2 GPUs.
```bash
deepspeed --num_gpus=2 examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
</hfoption>
<hfoption id="single-GPU">