source (stringclasses 470) | url (stringlengths 49-167) | file_type (stringclasses 1) | chunk (stringlengths 1-512) | chunk_id (stringlengths 5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#hyperparameter-search-using-trainer-api | .md | 🤗 Transformers provides a [`Trainer`] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] provides an API for hyperparameter search. This doc shows how to enable it with an example. | 34_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#hyperparameter-search-backend | .md | [`Trainer`] currently supports four hyperparameter search backends:
[optuna](https://optuna.org/), [sigopt](https://sigopt.com/), [raytune](https://docs.ray.io/en/latest/tune/index.html) and [wandb](https://wandb.ai/site/sweeps).
You should install the backend you want to use before using it for hyperparameter search:
```bash
pip install optuna/sigopt/wandb/ray[tune]
``` | 34_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | Define the hyperparameter search space; different backends require different formats.
For sigopt, see the sigopt [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter) documentation; it looks like the following:
```py
>>> def sigopt_hp_space(trial):
... return [
... {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
... {
... "categorical_values": ["16", "32", "64", "128"], | 34_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ... {
... "categorical_values": ["16", "32", "64", "128"],
... "name": "per_device_train_batch_size",
... "type": "categorical",
... },
... ]
```
For optuna, see the optuna documentation on [defining search spaces](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py); it looks like the following:
```py
>>> def optuna_hp_space(trial):
... return { | 34_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ```py
>>> def optuna_hp_space(trial):
... return {
... "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
... "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
... }
``` | 34_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ... }
```
Optuna provides multi-objective HPO. You can pass `direction` to `hyperparameter_search` and define your own `compute_objective` to return multiple objective values. The Pareto front (`List[BestRun]`) is returned by `hyperparameter_search`; refer to the test case `TrainerHyperParameterMultiObjectOptunaIntegrationTest` in [test_trainer](https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py). It looks like the following:
```py | 34_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ```py
>>> best_trials = trainer.hyperparameter_search(
... direction=["minimize", "maximize"],
... backend="optuna",
... hp_space=optuna_hp_space,
... n_trials=20,
... compute_objective=compute_objective,
... )
```
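For reference, a minimal multi-objective `compute_objective` could look like the sketch below; the metric keys (`eval_loss`, `eval_accuracy`) are assumptions and depend on your `compute_metrics` function:
```py
>>> # Hypothetical sketch: return one value per direction (here: minimize loss, maximize accuracy)
>>> def compute_objective(metrics):
...     return [metrics["eval_loss"], metrics["eval_accuracy"]]
```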
For raytune, see the Ray Tune documentation on [search spaces](https://docs.ray.io/en/latest/tune/api/search_space.html); it looks like the following:
```py
>>> def ray_hp_space(trial):
... return {
... "learning_rate": tune.loguniform(1e-6, 1e-4), | 34_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ```py
>>> def ray_hp_space(trial):
... return {
... "learning_rate": tune.loguniform(1e-6, 1e-4),
... "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
... }
```
For wandb, see the wandb documentation on [sweep configuration](https://docs.wandb.ai/guides/sweeps/configuration); it looks like the following:
```py
>>> def wandb_hp_space(trial):
... return {
... "method": "random",
... "metric": {"name": "objective", "goal": "minimize"},
... "parameters": { | 34_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ... "method": "random",
... "metric": {"name": "objective", "goal": "minimize"},
... "parameters": {
... "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
... "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
... },
... }
```
Define a `model_init` function and pass it to the [`Trainer`]. For example:
```py
>>> def model_init(trial):
... return AutoModelForSequenceClassification.from_pretrained( | 34_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ```py
>>> def model_init(trial):
... return AutoModelForSequenceClassification.from_pretrained(
... model_args.model_name_or_path,
... from_tf=bool(".ckpt" in model_args.model_name_or_path),
... config=config,
... cache_dir=model_args.cache_dir,
... revision=model_args.model_revision,
... token=True if model_args.use_auth_token else None,
... )
``` | 34_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ... revision=model_args.model_revision,
... token=True if model_args.use_auth_token else None,
... )
```
Create a [`Trainer`] with your `model_init` function, training arguments, training and test datasets, and evaluation function:
```py
>>> trainer = Trainer(
... model=None,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... processing_class=tokenizer, | 34_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | ... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... processing_class=tokenizer,
... model_init=model_init,
... data_collator=data_collator,
... )
```
Call hyperparameter search to get the best trial parameters. The backend can be `"optuna"`/`"sigopt"`/`"wandb"`/`"ray"`. `direction` can be `"minimize"` or `"maximize"`, indicating whether the objective should be minimized or maximized. | 34_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#how-to-enable-hyperparameter-search-in-example | .md | You can define your own `compute_objective` function; if it is not defined, the default `compute_objective` is called, which returns the sum of evaluation metrics (such as f1) as the objective value.
```py
>>> best_trial = trainer.hyperparameter_search(
... direction="maximize",
... backend="optuna",
... hp_space=optuna_hp_space,
... n_trials=20,
... compute_objective=compute_objective,
... )
``` | 34_3_10 |
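As a hedged sketch, a single-objective `compute_objective` could simply pick one metric out of the evaluation dictionary; the `eval_f1` key below is an assumption that depends on your `compute_metrics` function:
```py
>>> # Hypothetical sketch: optimize a single metric from the evaluation dictionary
>>> def compute_objective(metrics):
...     return metrics["eval_f1"]
```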
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/hpo_train.md | https://huggingface.co/docs/transformers/en/hpo_train/#hyperparameter-search-for-ddp-finetune | .md | Currently, hyperparameter search for DDP is enabled for optuna and sigopt. Only the rank-zero process generates the search trial and passes the arguments to the other ranks. | 34_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#modular-transformers | .md | `transformers` is an opinionated framework; our philosophy is defined in the following [conceptual guide](./philosophy).
The core of that philosophy is exemplified by the [single model, single file](https://huggingface.co/blog/transformers-design-philosophy)
aspect of the library. The downside of this component is that it limits the inheritance and importability of components
from one file to another in the toolkit. | 35_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#modular-transformers | .md | files to others in the toolkit.
As a result, model components tend to be repeated across many files. There are as many attention layers defined
in `transformers` as there are models, and a significant number of those are identical to each other.
The unfortunate consequence is that independent implementations tend to diverge as fixes and changes get applied
to specific parts of the code. | 35_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#modular-transformers | .md | to specific parts of the code.
To mitigate this issue, we introduced the concept of "copies" across the library. By adding a comment indicating
that code is a copy of another, we can enforce through CI and local commands that copies do not diverge. However,
while the complexity is low, this is often quite tedious to do.
And, finally, this contributes to adding a significant overhead to contributing models which we would like to remove. | 35_0_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#modular-transformers | .md | And, finally, this contributes to adding a significant overhead to contributing models which we would like to remove.
This approach often requires model contributions to add modeling code (~1k lines), processor (~500 lines), tests, docs,
etc. Model contribution PRs rarely add less than 3-5k lines of code, with much of this code being boilerplate.
This raises the bar for contributions, and with Modular Transformers, we're aiming to lower the bar to a much more
acceptable point. | 35_0_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#modular-transformers | .md | acceptable point.
If you plan to add a model to `transformers` make sure you read [How to add a model to 🤗 Transformers?](https://huggingface.co/docs/transformers/add_new_model).
For any kind of contributions, see [CONTRIBUTING.md](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md). | 35_0_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#what-is-it | .md | Modular Transformers introduces the concept of a "modular" file to a model folder. This modular file accepts code
that isn't typically accepted in modeling/processing files, as it allows importing from neighbouring models as well
as inheriting classes from other models.
This modular file defines models, processors, and the configuration class that would otherwise be defined in their
respective modules. | 35_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#what-is-it | .md | respective modules.
Finally, this feature introduces a new `linter` which will "unravel" the modular file into the "single model, single
file" directory structure. These files will get auto-generated every time the script is run; reducing the required
contributions to the modular file, and therefore only to the changes between the contributed model and others.
Model users will end up importing and using the single-file interface, so no change is expected here. Doing this, we | 35_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#what-is-it | .md | Model users will end up importing and using the single-file interface, so no change is expected here. Doing this, we
hope to combine the best of both worlds: enabling simple contributions while sticking to our philosophy.
This is therefore a replacement for the `# Copied from` markers, and previously contributed models can be expected to
be moved to the new Modular Transformers format in the coming months. | 35_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#details | .md | To generate a single file from the modular file, run the following command.
```bash
python utils/modular_model_converter.py --files-to-parse src/transformers/models/<your_model>/modular_<your_model>.py
```
The "linter", which unravels the inheritance and creates all single-files from the modular file, will flatten the
inheritance while trying to be invisible to Python users. At this time, the linter flattens a **single** level of
inheritance.
For example: | 35_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#details | .md | inheritance.
For example:
- If a configuration class inherits from another and adds/deletes an argument, the generated file will either directly
reference it (in case of addition) or completely remove it (in case of deletion).
- If a class inherits from another, for example: class GemmaModel(LlamaModel):, dependencies are automatically
inferred. All submodules will be automatically inferred from the superclass. | 35_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#details | .md | inferred. All submodules will be automatically inferred from the superclass.
- If you define new functions in the `modular` file and use them inside classes, the linter will automatically infer them and copy them into the generated files.
You should be able to write everything (the tokenizer, the image processor, the model, the config) in this `modular`
file, and the corresponding files will be created for you. | 35_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#enforcement | .md | Run the command below to ensure the generated content matches `modular_<your_model>.py`
```bash
python utils/check_modular_conversion.py --files src/transformers/models/<your_model>/modular_<your_model>.py
``` | 35_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#examples | .md | Here is a quick example with BERT and RoBERTa. The two models are intimately related: their modeling implementation
differs solely by a change in the embedding layer.
Instead of redefining the model entirely, here is what the `modular_roberta.py` file looks like for the modeling &
configuration classes (for the sake of the example, the tokenizer is ignored here, as it is very different).
```python
from torch import nn
from ..bert.configuration_bert import BertConfig
from ..bert.modeling_bert import ( | 35_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#examples | .md | ```python
from torch import nn
from ..bert.configuration_bert import BertConfig
from ..bert.modeling_bert import (
BertModel,
BertEmbeddings,
BertForMaskedLM
) | 35_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#examples | .md | # The RoBERTa config is identical to BERT's config
class RobertaConfig(BertConfig):
model_type = 'roberta'
# We redefine the embeddings here to highlight the padding ID difference, and we redefine the position embeddings
class RobertaEmbeddings(BertEmbeddings):
def __init__(self, config):
super().__init__(config)
self.padding_idx = config.pad_token_id
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
) | 35_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#examples | .md | # The RoBERTa model is identical to the BERT model, except for the embedding layer.
# We redefine the embeddings above, so here there is no need to do additional work
class RobertaModel(BertModel):
def __init__(self, config):
super().__init__(config)
self.embeddings = RobertaEmbeddings(config) | 35_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#examples | .md | # The heads now only need to redefine the model inside to the correct `RobertaModel`
class RobertaForMaskedLM(BertForMaskedLM):
def __init__(self, config):
super().__init__(config)
self.model = RobertaModel(config)
```
Note that if you do not use the dependency that you defined, you will have the following error:
```bash
ValueError: You defined `RobertaEmbeddings` in the modular_roberta.py, it should be used
when you define `BertModel`, as it is one of it's direct dependencies. Make sure | 35_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#examples | .md | when you define `BertModel`, as it is one of it's direct dependencies. Make sure
you use it in the `__init__` function.
```
Additionally, you may find a list of examples here: | 35_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#what-it-is-not | .md | It is not a replacement for the modeling code (yet?), and if your model is not based on anything else that ever existed, then you can add a `modeling` file as usual. | 35_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#removing-attributes-and-functions | .md | To remove attributes that are not used in your modular model, and that you don't want to see in the unravelled modeling:
```python
class GemmaModel(LlamaModel): | class GemmaModel(PreTrainedModel):
def __init__(self, config): | def __init__(self, config):
super().__init__(config) | super().__init__(config)
del self.embed_tokens | self.padding_idx = config.pad_token_id | 35_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#removing-attributes-and-functions | .md | del self.embed_tokens | self.padding_idx = config.pad_token_id
| self.vocab_size = config.vocab_size
|
| self.layers = nn.ModuleList(
| [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
| )
| self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
| self.rotary_emb = LlamaRotaryEmbedding(config=config) | 35_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#removing-attributes-and-functions | .md | | self.rotary_emb = LlamaRotaryEmbedding(config=config)
| self.gradient_checkpointing = False
|
| # Initialize weights and apply final processing
| self.post_init()
```
If you check the original `LlamaModel`, it has an `embed_tokens` attribute, which was removed here (as you would expect!) | 35_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#removing-attributes-and-functions | .md | ```
If you check the original `LlamaModel`, it has an `embed_tokens` attribute, which was removed here (as you would expect!)
Removing a function is pretty similar: you just need to redefine it to raise an exception (e.g. `raise AttributeError("Not needed for Gemma")`) to mimic the behaviour you actually want when you remove a parent function in Python.
```python
class GemmaTokenizer(LlamaTokenizer):
... | 35_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#removing-attributes-and-functions | .md | def get_spm_processor(self):
raise AttributeError("Not needed for Gemma")
def unk_token_length(self):
raise AttributeError("Not needed for Gemma")
``` | 35_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#define-new-functions | .md | If you define a new function in the `modular` file to be used inside a class, say
```python
def my_new_function(*args, **kwargs):
# Do something here
pass
class GemmaModel(LlamaModel):
def forward(self, *args, **kwargs):
# Call the function
example = my_new_function(*args, **kwargs)
# continue here
```
the `my_new_function` function (and, recursively, any other new functions called in its body) will be automatically copy-pasted
in the file where it is used. | 35_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#calling-super | .md | We recently shipped a few features that allow you to go from:
```python
class GemmaTokenizer(LlamaTokenizer, PretrainedTokenizerFast): | class GemmaTokenizer(PretrainedTokenizerFast):
def __init__(self, eos_token="</s>"): | def __init__(self, eos_token="</s>"):
eos_token = AddedToken(eos_token) | eos_token = AddedToken(eos_token)
PretrainedTokenizerFast.__init__(self, eos_token) | super().__init__(eos_token)
``` | 35_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#calling-super | .md | PretrainedTokenizerFast.__init__(self, eos_token) | super().__init__(eos_token)
```
This is useful when you **don't** want to unravel the call to `super()`, and you want to differentiate which superclass `__init__` you are calling! | 35_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/modular_transformers.md | https://huggingface.co/docs/transformers/en/modular_transformers/#special-naming | .md | We now also support special cases like
```python
class GemmaVisionModel(CLIPModel):
pass
```
where the name of your class `GemmaVision` is not the same as the modular `Gemma`. This is super useful for composite models. | 35_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 36_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 36_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#efficient-training-on-multiple-gpus | .md | If training a model on a single GPU is too slow or if the model's weights do not fit in a single GPU's memory, transitioning
to a multi-GPU setup may be a viable option. Prior to making this transition, thoroughly explore all the strategies covered
in the [Methods and tools for efficient training on a single GPU](perf_train_gpu_one) as they are universally applicable
to model training on any number of GPUs. Once you have employed those strategies and found them insufficient for your | 36_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#efficient-training-on-multiple-gpus | .md | to model training on any number of GPUs. Once you have employed those strategies and found them insufficient for your
case on a single GPU, consider moving to multiple GPUs.
Transitioning from a single GPU to multiple GPUs requires the introduction of some form of parallelism, as the workload
must be distributed across the resources. Multiple techniques can be employed to achieve parallelism, such as data | 36_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#efficient-training-on-multiple-gpus | .md | must be distributed across the resources. Multiple techniques can be employed to achieve parallelism, such as data
parallelism, tensor parallelism, and pipeline parallelism. It's important to note that there isn't a one-size-fits-all
solution, and the optimal settings depend on the specific hardware configuration you are using.
This guide offers an in-depth overview of individual types of parallelism, as well as guidance on ways to combine | 36_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#efficient-training-on-multiple-gpus | .md | This guide offers an in-depth overview of individual types of parallelism, as well as guidance on ways to combine
techniques and choosing an appropriate approach. For step-by-step tutorials on distributed training, please refer to
the [🤗 Accelerate documentation](https://huggingface.co/docs/accelerate/index).
<Tip>
While the main concepts discussed in this guide are likely applicable across frameworks, here we focus on
PyTorch-based implementations.
</Tip> | 36_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#efficient-training-on-multiple-gpus | .md | PyTorch-based implementations.
</Tip>
Before diving deeper into the specifics of each technique, let's go over the rough decision process when training
large models on a large infrastructure. | 36_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#scalability-strategy | .md | Begin by estimating how much vRAM is required to train your model. For models hosted on the 🤗 Hub, use our
[Model Memory Calculator](https://huggingface.co/spaces/hf-accelerate/model-memory-usage), which gives you
accurate calculations within a few percent margin.
**Parallelization strategy for a single Node / multi-GPU setup**
When training a model on a single node with multiple GPUs, your choice of parallelization strategy can significantly
impact performance. Here's a breakdown of your options: | 36_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#scalability-strategy | .md | impact performance. Here's a breakdown of your options:
**Case 1: Your model fits onto a single GPU**
If your model can comfortably fit onto a single GPU, you have two primary options:
1. DDP - Distributed DataParallel
2. [Zero Redundancy Optimizer (ZeRO)](https://arxiv.org/abs/1910.02054) - depending on the situation and configuration used, this method may or may not be faster, however, it's worth experimenting with it.
**Case 2: Your model doesn't fit onto a single GPU:** | 36_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#scalability-strategy | .md | **Case 2: Your model doesn't fit onto a single GPU:**
If your model is too large for a single GPU, you have several alternatives to consider:
1. PipelineParallel (PP)
2. [ZeRO](https://arxiv.org/abs/1910.02054)
3. [TensorParallel](#tensor-parallelism) (TP)
With very fast inter-node connectivity (e.g., NVLINK or NVSwitch) all three strategies (PP, ZeRO, TP) should result in
similar performance. However, without these, PP will be faster than TP or ZeRO. The degree of TP may also | 36_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#scalability-strategy | .md | similar performance. However, without these, PP will be faster than TP or ZeRO. The degree of TP may also
make a difference. It's best to experiment with your specific setup to determine the most suitable strategy.
TP is almost always used within a single node, that is, TP size <= GPUs per node.
**Case 3: Largest layer of your model does not fit onto a single GPU** | 36_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#scalability-strategy | .md | **Case 3: Largest layer of your model does not fit onto a single GPU**
1. If you are not using ZeRO, you have to use TensorParallel (TP), because PipelineParallel (PP) alone won't be sufficient to accommodate the large layer.
2. If you are using ZeRO, additionally adopt techniques from the [Methods and tools for efficient training on a single GPU](perf_train_gpu_one).
**Parallelization strategy for a multi-Node / multi-GPU setup** | 36_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#scalability-strategy | .md | **Parallelization strategy for a multi-Node / multi-GPU setup**
* When you have fast inter-node connectivity (e.g., NVLINK or NVSwitch) consider using one of these options:
1. ZeRO - as it requires close to no modifications to the model
2. A combination of PipelineParallel(PP) with TensorParallel(TP) and DataParallel(DP) - this approach will result in fewer communications, but requires significant changes to the model
* When you have slow inter-node connectivity and are still low on GPU memory: | 36_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#scalability-strategy | .md | * When you have slow inter-node connectivity and are still low on GPU memory:
1. Employ a combination of DataParallel(DP) with PipelineParallel(PP), TensorParallel(TP), and ZeRO.
In the following sections of this guide we dig deeper into how these different parallelism methods work. | 36_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#data-parallelism | .md | Even with only 2 GPUs, you can readily leverage the accelerated training capabilities offered by PyTorch's built-in features,
such as `DataParallel` (DP) and `DistributedDataParallel` (DDP). Note that
[PyTorch documentation](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html) recommends preferring
`DistributedDataParallel` (DDP) over `DataParallel` (DP) for multi-GPU training as it works for all models.
Let's take a look at how these two methods work and what makes them different. | 36_3_0 |
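As a minimal, hypothetical sketch of the DDP setup (the model is a placeholder and the script is assumed to be launched with `torchrun --nproc_per_node 2`):
```python
import os

import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

# One process per GPU; torchrun sets LOCAL_RANK for each process
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = nn.Linear(1024, 1024).to(local_rank)     # placeholder model
ddp_model = DDP(model, device_ids=[local_rank])  # gradients are averaged across processes during backward
```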
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | To understand the key differences in inter-GPU communication overhead between the two methods, let's review the processes per batch:
[DDP](https://pytorch.org/docs/master/notes/ddp.html):
- At the start time the main process replicates the model once from GPU 0 to the rest of GPUs
- Then for each batch:
1. Each GPU directly consumes its mini-batch of data.
2. During `backward`, once the local gradients are ready, they are averaged across all processes. | 36_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | 2. During `backward`, once the local gradients are ready, they are averaged across all processes.
[DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html):
For each batch:
1. GPU 0 reads the batch of data and then sends a mini-batch to each GPU.
2. The up-to-date model is replicated from GPU 0 to each GPU.
3. `forward` is executed, and output from each GPU is sent to GPU 0 to compute the loss.
4. The loss is distributed from GPU 0 to all GPUs, and `backward` is run. | 36_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | 4. The loss is distributed from GPU 0 to all GPUs, and `backward` is run.
5. Gradients from each GPU are sent to GPU 0 and averaged.
Key differences include:
1. DDP performs only a single communication per batch - sending gradients, while DP performs five different data exchanges per batch.
DDP copies data using [torch.distributed](https://pytorch.org/docs/master/distributed.html), while DP copies data within | 36_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | DDP copies data using [torch.distributed](https://pytorch.org/docs/master/distributed.html), while DP copies data within
the process via Python threads (which introduces limitations associated with GIL). As a result, **`DistributedDataParallel` (DDP) is generally faster than `DataParallel` (DP)** unless you have slow GPU card inter-connectivity.
2. Under DP, GPU 0 performs significantly more work than other GPUs, resulting in GPU under-utilization. | 36_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | 2. Under DP, GPU 0 performs significantly more work than other GPUs, resulting in GPU under-utilization.
3. DDP supports distributed training across multiple machines, whereas DP does not.
This is not an exhaustive list of differences between DP and DDP, however, other nuances are out of scope of this guide.
You can get a deeper understanding of these methods by reading this [article](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/). | 36_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | Let's illustrate the differences between DP and DDP with an experiment. We'll benchmark the differences between DP and
DDP with an added context of NVLink presence:
* Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`).
* Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`.
To disable the NVLink feature on one of the benchmarks, we use `NCCL_P2P_DISABLE=1`.
Here is the benchmarking code and outputs:
**DP**
```bash | 36_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | Here is the benchmarking code and outputs:
**DP**
```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 | 36_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | {'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}
```
**DDP w/ NVlink**
```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 | 36_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
```
**DDP w/o NVlink**
```bash
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 | 36_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Here are the same benchmarking results gathered in a table for convenience:
| Type | NVlink | Time |
| :----- | ----- | ---: |
| 2:DP | Y | 110s |
| 2:DDP | Y | 101s |
| 2:DDP | N | 131s |
As you can see, in this case DP is ~10% slower than DDP with NVlink, but ~15% faster than DDP without NVlink. | 36_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#dataparallel-vs-distributeddataparallel | .md | As you can see, in this case DP is ~10% slower than DDP with NVlink, but ~15% faster than DDP without NVlink.
The real difference will depend on how much data each GPU needs to sync with the others - the more there is to sync,
the more a slow link will impede the overall runtime. | 36_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | ZeRO-powered data parallelism (ZeRO-DP) is illustrated in the following diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/).
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png" alt="DeepSpeed-Image-1"/>
</div> | 36_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | </div>
While it may appear complex, it is a very similar concept to `DataParallel` (DP). The difference is that instead of
replicating the full model parameters, gradients and optimizer states, each GPU stores only a slice of it. Then, at
run-time when the full layer parameters are needed just for the given layer, all GPUs synchronize to give each other
parts that they miss.
To illustrate this idea, consider a simple model with 3 layers (La, Lb, and Lc), where each layer has 3 parameters. | 36_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | To illustrate this idea, consider a simple model with 3 layers (La, Lb, and Lc), where each layer has 3 parameters.
Layer La, for example, has weights a0, a1 and a2:
```
La | Lb | Lc
---|----|---
a0 | b0 | c0
a1 | b1 | c1
a2 | b2 | c2
```
If we have 3 GPUs, ZeRO-DP splits the model onto 3 GPUs like so:
```
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0 | 36_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1 | 36_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
```
In a way, this is the same horizontal slicing as tensor parallelism, as opposed to Vertical
slicing, where one puts whole layer-groups on different GPUs. Now let's see how this works:
Each of these GPUs will get the usual mini-batch as it works in DP:
```
x0 => GPU0
x1 => GPU1
x2 => GPU2
```
The inputs are passed without modifications as if they would be processed by the original model. | 36_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | x1 => GPU1
x2 => GPU2
```
The inputs are passed without modifications as if they would be processed by the original model.
First, the inputs get to the layer `La`. What happens at this point?
On GPU0: the x0 mini-batch requires the a0, a1, a2 parameters to do its forward path through the layer, but the GPU0 has only a0.
It will get a1 from GPU1 and a2 from GPU2, bringing all the pieces of the model together. | 36_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | It will get a1 from GPU1 and a2 from GPU2, bringing all the pieces of the model together.
In parallel, GPU1 gets another mini-batch - x1. GPU1 has the a1 parameter, but needs a0 and a2, so it gets those from GPU0 and GPU2.
Same happens to GPU2 that gets the mini-batch x2. It gets a0 and a1 from GPU0 and GPU1.
This way each of the 3 GPUs gets the full tensors reconstructed and makes a forward pass with its own mini-batch. | 36_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | This way each of the 3 GPUs gets the full tensors reconstructed and makes a forward pass with its own mini-batch.
As soon as the calculation is done, the data that is no longer needed gets dropped - it's only used during the calculation.
The reconstruction is done efficiently via a pre-fetch.
Then the whole process is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.
<Tip> | 36_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | Then the whole process is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.
<Tip>
This mechanism is similar to an efficient group backpacking strategy: person A carries the tent, person B carries the stove,
and person C carries the axe. Each night they all share what they have with others and get from others what they don't have,
and in the morning they pack up their allocated type of gear and continue on their way. This is what ZeRO DP/Sharded DDP is. | 36_5_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | and in the morning they pack up their allocated type of gear and continue on their way. This is what ZeRO DP/Sharded DDP is.
Compare this strategy to the simple one where each person has to carry their own tent, stove and axe (similar to
DataParallel (DP and DDP) in PyTorch), which would be far more inefficient.
</Tip>
While reading the literature on this topic you may encounter the following synonyms: Sharded, Partitioned. | 36_5_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | </Tip>
While reading the literature on this topic you may encounter the following synonyms: Sharded, Partitioned.
If you pay close attention the way ZeRO partitions the model's weights - it looks very similar to tensor parallelism
which will be discussed later. This is because it partitions/shards each layer's weights, unlike vertical model parallelism
which is discussed next.
Implementations:
- [DeepSpeed](https://www.deepspeed.ai/tutorials/zero/) ZeRO-DP stages 1+2+3 | 36_5_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#zero-data-parallelism | .md | which is discussed next.
Implementations:
- [DeepSpeed](https://www.deepspeed.ai/tutorials/zero/) ZeRO-DP stages 1+2+3
- [`Accelerate` integration](https://huggingface.co/docs/accelerate/en/usage_guides/deepspeed)
- [`transformers` integration](main_classes/trainer#trainer-integrations) | 36_5_11 |
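For the `transformers` integration above, a minimal hedged sketch is to point [`TrainingArguments`] at a DeepSpeed config file (the config path is a placeholder, and `deepspeed` must be installed):
```py
>>> from transformers import TrainingArguments

>>> # Hypothetical sketch: the JSON file (placeholder path) holds the ZeRO stage and related settings
>>> training_args = TrainingArguments(output_dir="output", deepspeed="ds_config_zero3.json")
```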
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | To explain Pipeline parallelism, we'll first look into Naive Model Parallelism (MP), also known as Vertical MP. This approach
involves distributing groups of model layers across multiple GPUs by assigning specific layers to specific GPUs with `.to()`.
As data flows through these layers, it is moved to the same GPU as the layer, while the other layers remain untouched.
We refer to this Model parallelism as "Vertical" because of how models are typically visualized. For example, the | 36_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | We refer to this Model parallelism as "Vertical" because of how models are typically visualized. For example, the
following diagram shows an 8-layer model split vertically into two slices, placing layers 0-3 onto
GPU0 and 4-7 to GPU1:
```
================
| Layer | |
| 0 | |
| 1 | GPU0 |
| 2 | |
| 3 | |
================
| Layer | |
| 4 | |
| 5 | GPU1 |
| 6 | |
| 7 | |
================
``` | 36_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | ================
| Layer | |
| 4 | |
| 5 | GPU1 |
| 6 | |
| 7 | |
================
```
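As a minimal, hypothetical PyTorch sketch of this vertical split (the layer sizes and device names are assumptions, not from the original text):
```python
import torch
from torch import nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        # layers 0-3 live on GPU0, layers 4-7 on GPU1
        self.part0 = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(4)]).to("cuda:0")
        self.part1 = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(4)]).to("cuda:1")

    def forward(self, x):
        x = self.part0(x.to("cuda:0"))
        # the device-to-device copy below is the communication overhead discussed next
        return self.part1(x.to("cuda:1"))
```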
In this example, when data moves from layer 0 to 3, it's no different from regular forward pass. However, passing data
from layer 3 to 4 requires moving it from GPU0 to GPU1, introducing a communication overhead. If the participating
GPUs are on the same compute node (e.g. same physical machine) this copying is fast, but if the GPUs are distributed | 36_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | GPUs are on the same compute node (e.g. same physical machine) this copying is fast, but if the GPUs are distributed
across different compute nodes (e.g. multiple machines), the communication overhead could be substantially greater.
Following that, layers 4 to 7 work as they would in the original model. Upon completion of the 7th layer, there is often
a need to send the data back to layer 0 where the labels are (or alternatively send the labels to the last layer). Now the loss can be | 36_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | computed and the optimizer can do its work.
Naive Model Parallelism comes with several shortcomings:
- **All but one GPU are idle at any given moment**: if 4 GPUs are used, it's nearly identical to quadrupling the amount of memory of a single GPU, and ignoring the rest of the hardware. | 36_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | - **Overhead in data transfer between devices**: E.g. 4x 6GB cards will be able to accommodate the same size as 1x 24GB card using naive MP, but a single 24GB card will complete the training faster, because it doesn't have the data copying overhead. But, say, if you have 40GB cards and need to fit a 45GB model you can with 4x 40GB cards (but barely because of the gradient and optimizer states)
- **Copying shared embeddings**: Shared embeddings may need to get copied back and forth between GPUs. | 36_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | - **Copying shared embeddings**: Shared embeddings may need to get copied back and forth between GPUs.
Now that you are familiar with how the naive approach to model parallelism works and its shortcomings, let's look at Pipeline Parallelism (PP).
PP is almost identical to a naive MP, but it solves the GPU idling problem by chunking the incoming batch into micro-batches
and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process. | 36_6_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process.
The following illustration from the [GPipe paper](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html)
shows the naive MP on the top, and PP on the bottom:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-gpipe-bubble.png" alt="MP vs PP"/>
</div> | 36_6_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | </div>
At the bottom of the diagram, you can observe that the Pipeline Parallelism (PP) approach minimizes the number of idle
GPU zones, referred to as 'bubbles'. Both parts of the diagram show a parallelism level of degree 4, meaning that 4 GPUs
are involved in the pipeline. You can see that there's a forward path of 4 pipe stages (F0, F1, F2 and F3) followed by
a backward path in reverse order (B3, B2, B1, and B0). | 36_6_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | a backward path in reverse order (B3, B2, B1, and B0).
PP introduces a new hyperparameter to tune - `chunks`, which determines how many data chunks are sent in a sequence
through the same pipe stage. For example, in the bottom diagram you can see `chunks=4`. GPU0 performs the same
forward path on chunk 0, 1, 2 and 3 (F0,0, F0,1, F0,2, F0,3) and then it waits for other GPUs to complete their work. | 36_6_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | forward path on chunk 0, 1, 2 and 3 (F0,0, F0,1, F0,2, F0,3) and then it waits for other GPUs to complete their work.
Only when the other GPUs begin to complete their work does GPU0 start working again, doing the backward path for chunks
3, 2, 1 and 0 (B0,3, B0,2, B0,1, B0,0).
Note that this is the same concept as gradient accumulation steps. PyTorch uses `chunks`, while DeepSpeed refers
to the same hyperparameter as gradient accumulation steps. | 36_6_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | to the same hyperparameter as gradient accumulation steps.
Because of the chunks, PP introduces the notion of micro-batches (MBS). DP splits the global data batch size into
mini-batches, so if you have a DP degree of 4, a global batch size of 1024 gets split up into 4 mini-batches of
256 each (1024/4). And if the number of `chunks` (or GAS) is 32 we end up with a micro-batch size of 8 (256/32). Each | 36_6_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | 256 each (1024/4). And if the number of `chunks` (or GAS) is 32 we end up with a micro-batch size of 8 (256/32). Each
Pipeline stage works with a single micro-batch at a time. To calculate the global batch size of the DP + PP setup,
use the formula: `mbs * chunks * dp_degree` (`8 * 32 * 4 = 1024`).
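Worked out in code, the numbers from the text line up as follows:
```python
mbs, chunks, dp_degree = 8, 32, 4           # micro-batch size, pipeline chunks (GAS), DP degree
mini_batch = mbs * chunks                   # 256 samples per DP rank
global_batch_size = mini_batch * dp_degree  # 1024
assert global_batch_size == 1024
```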
With `chunks=1` you end up with the naive MP, which is inefficient. With a large `chunks` value you end up with | 36_6_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | With `chunks=1` you end up with the naive MP, which is inefficient. With a large `chunks` value you end up with
tiny micro-batch sizes which is also inefficient. For this reason, we encourage you to experiment with the `chunks` value to
find the one that leads to the most efficient GPU utilization.
You may notice a bubble of "dead" time on the diagram that can't be parallelized because the last `forward` stage | 36_6_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | You may notice a bubble of "dead" time on the diagram that can't be parallelized because the last `forward` stage
has to wait for `backward` to complete the pipeline. The purpose of finding the best value for `chunks` is to enable a high
concurrent GPU utilization across all participating GPUs which translates to minimizing the size of the bubble.
Pipeline API solutions have been implemented in:
- PyTorch
- DeepSpeed
- Megatron-LM
These come with some shortcomings: | 36_6_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | Pipeline API solutions have been implemented in:
- PyTorch
- DeepSpeed
- Megatron-LM
These come with some shortcomings:
- They have to modify the model quite heavily, because Pipeline requires one to rewrite the normal flow of modules into an `nn.Sequential` sequence of the same modules, which may require changes to the design of the model. | 36_6_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | - Currently the Pipeline API is very restricted. If you had a bunch of Python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have a batch size as the very first dimension, since pipeline is going to chunk the mini batch into micro-batches. Possible improvements are being discussed here | 36_6_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | dimension, since pipeline is going to chunk the mini batch into micro-batches. Possible improvements are being discussed here https://github.com/pytorch/pytorch/pull/50693 | 36_6_17 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | - Conditional control flow at the level of pipe stages is not possible - e.g., Encoder-Decoder models like T5 require special workarounds to handle a conditional encoder stage.
- They have to arrange each layer so that the output of one layer becomes an input to the other layer.
More recent solutions include:
- Varuna
- Sagemaker
We have not experimented with Varuna and SageMaker but their papers report that they have overcome the list of problems | 36_6_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_many.md | https://huggingface.co/docs/transformers/en/perf_train_gpu_many/#from-naive-model-parallelism-to-pipeline-parallelism | .md | We have not experimented with Varuna and SageMaker but their papers report that they have overcome the list of problems
mentioned above and that they require smaller changes to the user's model.
Implementations:
- [PyTorch](https://pytorch.org/docs/stable/pipeline.html) (initial support in pytorch-1.8, and progressively getting improved in 1.9 and more so in 1.10). Some [examples](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py) | 36_6_19 |