--source_lang en --target_lang ro
```
</hfoption>
<hfoption id="single-GPU">
To deploy DeepSpeed on a single GPU, add the `--num_gpus` parameter. It isn't necessary to explicitly set this value if you only have 1 GPU because DeepSpeed deploys all GPUs it can see on a given node.
```bash
deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
DeepSpeed is still useful with just 1 GPU because you can:
1. Offload some computations and memory to the CPU, freeing up GPU resources so your model can use a larger batch size or fit a very large model that normally wouldn't fit.
2. Minimize memory fragmentation with its smart GPU memory management system, which also allows you to fit bigger models and data batches.
<Tip>
Set the `allgather_bucket_size` and `reduce_bucket_size` values to 2e8 in the [ZeRO-2](#zero-configuration) configuration file to get better performance on a single GPU.
</Tip>
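For example, a minimal sketch of what those two values might look like in the ZeRO-2 config (only the relevant keys are shown; the rest of your configuration stays the same):
```yaml
{
    "zero_optimization": {
        "stage": 2,
        "allgather_bucket_size": 2e8,
        "reduce_bucket_size": 2e8
    }
}
```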
</hfoption>
</hfoptions>
## Multi-node deployment

A node is one or more GPUs for running a workload. A more powerful setup is a multi-node setup, which can be launched with the `deepspeed` launcher. For this guide, let's assume there are two nodes with 8 GPUs each. The first node can be accessed with `ssh hostname1` and the second node with `ssh hostname2`. Both nodes must be able to communicate with each other locally over ssh without a password.
By default, DeepSpeed expects your multi-node environment to use shared storage. If this is not the case and each node can only see the local filesystem, you need to adjust the config file to include a [`checkpoint`](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) section to allow loading without access to a shared filesystem:
```yaml
{
"checkpoint": {
"use_node_local_storage": true
}
}
```
You could also use the [`Trainer`]'s `--save_on_each_node` argument to automatically add the above `checkpoint` to your config.
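If you configure [`TrainingArguments`] in Python rather than on the command line, a sketch of the equivalent setting might look like this (the other arguments are placeholders):
```py
from transformers import TrainingArguments

# save_on_each_node writes checkpoints to each node's local storage,
# so no shared filesystem is required between nodes
training_args = TrainingArguments(
    output_dir="output_dir",
    save_on_each_node=True,
    deepspeed="ds_config.json",
)
```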
<hfoptions id="multinode">
<hfoption id="torchrun">
For [torchrun](https://pytorch.org/docs/stable/elastic/run.html), you have to ssh to each node and run the following command on both of them. The launcher waits until both nodes are synchronized before launching the training.
```bash
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
```
</hfoption>
<hfoption id="deepspeed">
For the `deepspeed` launcher, start by creating a `hostfile`.
```bash
hostname1 slots=8
hostname2 slots=8
```
Then you can launch the training with the following command. The `deepspeed` launcher automatically launches the command on both nodes at once.
```bash
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py <normal cl args> --deepspeed ds_config.json
```
Check out the [Resource Configuration (multi-node)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) guide for more details about configuring multi-node compute resources.
</hfoption>
</hfoptions>
## SLURM

In a SLURM environment, you'll need to adapt your SLURM script to your specific environment. An example SLURM script may look like:
```bash
#!/bin/bash
#SBATCH --job-name=test-nodes # name
#SBATCH --nodes=2 # nodes
#SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10 # number of cores per task
#SBATCH --gres=gpu:8 # number of gpus
#SBATCH --time 20:00:00 # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out # output file name

export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901

srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
--nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
--master_addr $MASTER_ADDR --master_port $MASTER_PORT \
your_program.py <normal cl args> --deepspeed ds_config.json'
```
Then you can schedule your multi-node deployment with the following command which launches training simultaneously on all nodes.
```bash
sbatch launch.slurm
```
## Notebook

The `deepspeed` launcher doesn't support deployment from a notebook, so you'll need to emulate the distributed environment. However, this only works for 1 GPU. If you want to use more than 1 GPU, you must use a multi-process environment for DeepSpeed to work, which means you have to use the `deepspeed` launcher and it can't be emulated as shown here.
```py
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates a launcher in the notebook
import os

os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994" # modify if RuntimeError: Address already in use
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1" | 41_21_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/deepspeed.md | https://huggingface.co/docs/transformers/en/deepspeed/#notebook | .md | # Now proceed as normal, plus pass the DeepSpeed config file
training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
trainer = Trainer(...)
trainer.train()
```
If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell.
```py
%%bash
cat <<'EOT' > ds_config_zero3.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
EOT
```
If the training script is in a file and not in a notebook cell, you can launch `deepspeed` normally from the shell in a notebook cell. For example, to launch `run_translation.py`:
```py
!git clone https://github.com/huggingface/transformers
!cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...
```
You could also use `%%bash` magic and write multi-line code to run the shell program, but you won't be able to view the logs until training is complete. With `%%bash` magic, you don't need to emulate a distributed environment.
```py
%%bash
git clone https://github.com/huggingface/transformers
cd transformers
deepspeed examples/pytorch/translation/run_translation.py ...
```
## Save model weights

DeepSpeed stores the main full precision fp32 weights in custom checkpoint optimizer files (the glob pattern looks like `global_step*/*optim_states.pt`), which are saved under the normal checkpoint directory.
<hfoptions id="save">
<hfoption id="fp16">
A model trained with ZeRO-2 saves the pytorch_model.bin weights in fp16. To save the model weights in fp16 for a model trained with ZeRO-3, you need to set `"stage3_gather_16bit_weights_on_model_save": true` because the model weights are partitioned across multiple GPUs. Otherwise, the [`Trainer`] won't save the weights in fp16 and it won't create a pytorch_model.bin file. This is because DeepSpeed's state_dict contains a placeholder instead of the real weights and you won't be able to load them.
```yaml
{
"zero_optimization": {
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
</hfoption>
<hfoption id="fp32">
The full precision weights shouldn't be saved during training because they can require a lot of memory. It is usually best to save the fp32 weights offline after training is complete. But if you have a lot of free CPU memory, it is possible to save the fp32 weights during training. This section covers both online and offline approaches.
### Online

You must have saved at least one checkpoint to load the latest checkpoint as shown in the following:
```py
from transformers.trainer_utils import get_last_checkpoint
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

checkpoint_dir = get_last_checkpoint(trainer.args.output_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
If you've enabled the `--load_best_model_at_end` parameter to track the best checkpoint in [`TrainingArguments`], you can finish training first and save the final model explicitly. Then you can reload it as shown below:
```py
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final")
trainer.deepspeed.save_checkpoint(checkpoint_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
<Tip>
Once `load_state_dict_from_zero_checkpoint` is run, the model is no longer usable in DeepSpeed in the context of the same application. You'll need to initialize the DeepSpeed engine again since `model.load_state_dict(state_dict)` removes all the DeepSpeed magic from it. Only use this at the very end of training.
</Tip>
You can also extract and load the state_dict of the fp32 weights:
```py
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)  # already on cpu
model = model.cpu()
model.load_state_dict(state_dict)
```
### Offline

DeepSpeed provides a zero_to_fp32.py script at the top-level of the checkpoint folder for extracting weights at any point. This is a standalone script and you don't need a configuration file or [`Trainer`].
For example, if your checkpoint folder looked like this:
```bash
$ ls -l output_dir/checkpoint-1/
-rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json
drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/
-rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest
-rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt
-rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin
-rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt
-rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json
-rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model
-rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json
-rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json
-rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin
-rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py*
```
To reconstruct the fp32 weights from the DeepSpeed checkpoint (ZeRO-2 or ZeRO-3) subfolder `global_step1`, run the following command to create and consolidate the full fp32 weights from multiple GPUs into a single pytorch_model.bin file. The script automatically discovers the subfolder containing the checkpoint.
```py
python zero_to_fp32.py . pytorch_model.bin
```
<Tip>
Run `python zero_to_fp32.py -h` for more usage details. The script requires 2x the general RAM of the final fp32 weights.
</Tip>
</hfoption>
</hfoptions>
## ZeRO Inference

[ZeRO Inference](https://www.deepspeed.ai/2022/09/09/zero-inference.html) places the model weights in CPU or NVMe memory to avoid burdening the GPU, which makes it possible to run inference with huge models on a GPU. Inference doesn't require any large additional amounts of memory for the optimizer states and gradients, so you can fit much larger batches and/or sequence lengths on the same hardware.
ZeRO Inference shares the same configuration file as [ZeRO-3](#zero-configuration), and ZeRO-2 and ZeRO-1 configs won't work because they don't provide any benefits for inference.
To run ZeRO Inference, pass your usual training arguments to the [`TrainingArguments`] class and add the `--do_eval` argument.
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --do_eval --deepspeed ds_config.json
```
## Non-Trainer DeepSpeed integration

DeepSpeed also works with Transformers without the [`Trainer`] class. This is handled by the [`HfDeepSpeedConfig`] which only takes care of gathering ZeRO-3 parameters and splitting a model across multiple GPUs when you call [`~PreTrainedModel.from_pretrained`].
<Tip>
If you want everything automatically taken care of for you, try using DeepSpeed with the [`Trainer`]! Otherwise, you'll need to follow the [DeepSpeed documentation](https://www.deepspeed.ai/) and manually configure the parameter values in the config file (you can't use the `"auto"` value).
</Tip>
To efficiently deploy ZeRO-3, you must instantiate the [`HfDeepSpeedConfig`] object before the model and keep that object alive:
<hfoptions id="models">
<hfoption id="pretrained model">
```py
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed

ds_config = {...}  # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
model = AutoModel.from_pretrained("openai-community/gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
</hfoption>
<hfoption id="non-pretrained model">
[`HfDeepSpeedConfig`] is not required for ZeRO-1 or ZeRO-2.
```py
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed

ds_config = {...}  # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
config = AutoConfig.from_pretrained("openai-community/gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
</hfoption>
</hfoptions>
## Non-Trainer ZeRO Inference

To run ZeRO Inference without the [`Trainer`] in cases where you can't fit a model onto a single GPU, try using additional GPUs and/or offloading to CPU memory. The important nuance to understand here is that the way ZeRO is designed, you can process different inputs on different GPUs in parallel.
Make sure to:
* disable CPU offload if you have enough GPU memory (since it slows things down).
* enable bf16 if you have an Ampere or newer GPU to make things faster. If you don't have one of these GPUs, you may enable fp16 as long as you don't use a model pretrained in bf16 (T5 models) because it may lead to an overflow error.
Take a look at the following script to get a better idea of how to run ZeRO Inference without the [`Trainer`] on a model that won't fit on a single GPU.
```py
#!/usr/bin/env python

# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py

from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
# distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size

# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
# faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
# all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you don't
# - want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control
# - which params should remain on gpus - the larger the value the smaller the offload size
#
# For in-depth info on Deepspeed config see
# https://huggingface.co/docs/transformers/main/main_classes/deepspeed

# keeping the same format as json for consistency, except it uses lower case for true/false
# fmt: off
ds_config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size, | 41_27_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/deepspeed.md | https://huggingface.co/docs/transformers/en/deepspeed/#non-trainer-zero-inference | .md | "stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# fmt: on

# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive

# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()  # inference

# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1: | 41_27_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/deepspeed.md | https://huggingface.co/docs/transformers/en/deepspeed/#non-trainer-zero-inference | .md | text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever" | 41_27_16 |

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
    outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
Save the script as t0.py and launch it:
```bash
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
rank1:
in=Is this review positive or negative? Review: this is the worst restaurant ever
out=negative
```
This is a very basic example and you'll want to adapt it to your use case.
## Generate

Using multiple GPUs with ZeRO-3 for generation requires synchronizing the GPUs by setting `synced_gpus=True` in the [`~GenerationMixin.generate`] method. Otherwise, if one GPU finishes generating before another, the whole system hangs because the remaining GPUs haven't received the weight shard from the GPU that finished first.
For Transformers>=4.28, `synced_gpus` is automatically set to `True` if multiple GPUs are detected during generation.
## Troubleshoot

When you encounter an issue, you should consider whether DeepSpeed is the cause of the problem because often it isn't (unless it's super obvious and you can see DeepSpeed modules in the exception)! The first step should be to retry your setup without DeepSpeed, and if the problem persists, then you can report the issue. If the issue is a core DeepSpeed problem and unrelated to the Transformers integration, open an Issue on the [DeepSpeed repository](https://github.com/microsoft/DeepSpeed).
For issues related to the Transformers integration, please provide the following information:
* the full DeepSpeed config file
* the command line arguments of the [`Trainer`], or [`TrainingArguments`] arguments if you're scripting the [`Trainer`] setup yourself (don't dump the [`TrainingArguments`] which has dozens of irrelevant entries)
* the outputs of:
```bash
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
```
* a link to a Google Colab notebook to reproduce the issue
* if impossible, a standard and non-custom dataset we can use and also try to use an existing example to reproduce the issue with
The following sections provide a guide for resolving two of the most common issues.
### DeepSpeed process killed at startup

When the DeepSpeed process is killed during launch without a traceback, that usually means the program tried to allocate more CPU memory than your system has, or your process tried to allocate more CPU memory than allowed, leading the OS kernel to terminate the process. In this case, check whether your configuration file has either `offload_optimizer`, `offload_param`, or both configured to offload to the CPU.
If you have NVMe and a ZeRO-3 setup, experiment with offloading to the NVMe ([estimate](https://deepspeed.readthedocs.io/en/latest/memory.html) the memory requirements for your model).
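As a rough sketch, an NVMe offload section in a ZeRO-3 config might look like the following (the `nvme_path` is a placeholder for your local NVMe mount):
```yaml
{
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/local_nvme"
        },
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme"
        }
    }
}
```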
### NaN loss

NaN loss often occurs when a model is pretrained in bf16 and then you try to use it with fp16 (especially relevant for TPU-trained models). To resolve this, use fp32 or bf16 if your hardware supports it (TPU, Ampere GPUs or newer).
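For example, a minimal sketch of enabling bf16 in the config instead of fp16 (assuming supported hardware, and paired with the corresponding bf16 training argument):
```yaml
{
    "bf16": {
        "enabled": "auto"
    }
}
```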
The other issue may be related to using fp16. For example, if this is your fp16 configuration:
```yaml
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
You might see the following `OVERFLOW!` messages in the logs:
```bash
 0%|                                                    | 0/189 [00:00<?, ?it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144
 1%|▌                                                   | 1/189 [00:00<01:26, 2.17it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0
 1%|█
[...]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
14%|█████████████████                                   | 27/189 [00:14<01:13, 2.21it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|██████████████████                                  | 28/189 [00:14<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|██████████████████                                  | 29/189 [00:15<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[...]
```
This means the DeepSpeed loss scaler is unable to find a scaling coefficient to overcome loss overflow. To fix it, try a higher `initial_scale_power` value (32 usually works).
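For example, the fp16 configuration above with a higher `initial_scale_power`:
```yaml
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 32,
        "hysteresis": 2,
        "min_loss_scale": 1
    }
}
```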
## Resources

DeepSpeed ZeRO is a powerful technology for training and loading very large models for inference with limited GPU resources, making it more accessible to everyone. To learn more about DeepSpeed, feel free to read the [blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed), [documentation](https://www.deepspeed.ai/getting-started/), and [GitHub repository](https://github.com/microsoft/deepspeed).
The following papers are also a great resource for learning more about ZeRO:
* [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://hf.co/papers/1910.02054)
* [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://hf.co/papers/2101.06840)
* [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://hf.co/papers/2104.07857)
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 42_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/index.md | https://huggingface.co/docs/transformers/en/index/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# 🤗 Transformers

State-of-the-art Machine Learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).
🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs, carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:
📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, code generation, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer Vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🔍 **Multimodal**: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
🤗 Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life; train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX and TorchScript for deployment in production environments.
Join the growing community on the [Hub](https://huggingface.co/models), [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!
## If you are looking for custom support from the Hugging Face team

<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>
## Contents

The documentation is organized into five sections:
- **GET STARTED** provides a quick tour of the library and installation instructions to get up and running.
- **TUTORIALS** are a great place to start if you're a beginner. This section will help you gain the basic skills you need to start using the library.
- **HOW-TO GUIDES** show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or how to write and share a custom model.
- **CONCEPTUAL GUIDES** offers more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes all classes and functions:
- **MAIN CLASSES** details the most important classes like configuration, model, tokenizer, and pipeline.
- **MODELS** details the classes and functions related to each model implemented in the library.
- **INTERNAL HELPERS** details utility classes and functions used internally.
## Supported models and frameworks

The table below represents the current support in the library for each of those models, whether they have a Python tokenizer (called "slow"), a "fast" tokenizer backed by the 🤗 Tokenizers library, and whether they have support in Jax (via Flax), PyTorch, and/or TensorFlow.
<!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->
| Model | PyTorch support | TensorFlow support | Flax Support |
|:------------------------------------------------------------------------:|:---------------:|:------------------:|:------------:|
| [ALBERT](model_doc/albert) | ✅ | ✅ | ✅ |
| [ALIGN](model_doc/align) | ✅ | ❌ | ❌ |
| [AltCLIP](model_doc/altclip) | ✅ | ❌ | ❌ |
| [Aria](model_doc/aria) | ✅ | ❌ | ❌ |
| [AriaText](model_doc/aria_text) | ✅ | ❌ | ❌ |
| [Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer) | ✅ | ❌ | ❌ |
| [Autoformer](model_doc/autoformer) | ✅ | ❌ | ❌ |
| [Bamba](model_doc/bamba) | ✅ | ❌ | ❌ |
| [Bark](model_doc/bark) | ✅ | ❌ | ❌ |
| [BART](model_doc/bart) | ✅ | ✅ | ✅ |
| [BARThez](model_doc/barthez) | ✅ | ✅ | ✅ |
| [BARTpho](model_doc/bartpho) | ✅ | ✅ | ✅ |
| [BEiT](model_doc/beit) | ✅ | ❌ | ✅ |
| [BERT](model_doc/bert) | ✅ | ✅ | ✅ |
| [Bert Generation](model_doc/bert-generation) | ✅ | ❌ | ❌ |
| [BertJapanese](model_doc/bert-japanese) | ✅ | ✅ | ✅ |
| [BERTweet](model_doc/bertweet) | ✅ | ✅ | ✅ |
| [BigBird](model_doc/big_bird) | ✅ | ❌ | ✅ |
| [BigBird-Pegasus](model_doc/bigbird_pegasus) | ✅ | ❌ | ❌ |
| [BioGpt](model_doc/biogpt) | ✅ | ❌ | ❌ |
| [BiT](model_doc/bit) | ✅ | ❌ | ❌ |
| [Blenderbot](model_doc/blenderbot) | ✅ | ✅ | ✅ |
| [BlenderbotSmall](model_doc/blenderbot-small) | ✅ | ✅ | ✅ |
| [BLIP](model_doc/blip) | ✅ | ✅ | ❌ |
| [BLIP-2](model_doc/blip-2) | ✅ | ❌ | ❌ |
| [BLOOM](model_doc/bloom) | ✅ | ❌ | ✅ |
| [BORT](model_doc/bort) | ✅ | ✅ | ✅ |
| [BridgeTower](model_doc/bridgetower) | ✅ | ❌ | ❌ |
| [BROS](model_doc/bros) | ✅ | ❌ | ❌ |
| [ByT5](model_doc/byt5) | ✅ | ✅ | ✅ |
| [CamemBERT](model_doc/camembert) | ✅ | ✅ | ❌ |
| [CANINE](model_doc/canine) | ✅ | ❌ | ❌ |
| [Chameleon](model_doc/chameleon) | ✅ | ❌ | ❌ |
| [Chinese-CLIP](model_doc/chinese_clip) | ✅ | ❌ | ❌ |
| [CLAP](model_doc/clap) | ✅ | ❌ | ❌ |
| [CLIP](model_doc/clip) | ✅ | ✅ | ✅ |
| [CLIPSeg](model_doc/clipseg) | ✅ | ❌ | ❌ |