source: string (273 distinct values)
url: string (length 47–172)
file_type: string (1 distinct value)
chunk: string (length 1–512)
chunk_id: string (length 5–9)
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
30_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
<Tip warning={true}> This is experimental and the API may change in the future. </Tip>
30_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
[LoRA (Low-Rank Adaptation of Large Language Models)](https://hf.co/papers/2106.09685) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like
30_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speed up training.
30_1_2
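For intuition on what those extra weights are: LoRA keeps each original weight matrix frozen and adds a trainable low-rank update `B @ A` next to it, so only the two small matrices are optimized. The following is a minimal illustrative sketch of that idea (not the diffusers/PEFT implementation; the layer size, rank, and init are arbitrary):

```py
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of a LoRA-augmented linear layer: frozen base weight + low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the original weights stay frozen
        # A projects down to `rank`, B projects back up; only these two are trained
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 6144 trainable values vs. ~590k parameters in the frozen base layer
```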
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
<Tip>
30_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
LoRA is very versatile and supported for [DreamBooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py), [Kandinsky 2.2](https://github.com/huggingface/diffusers/blob/main/examples/kandinsky2_2/text_to_image/train_text_to_image_lora_decoder.py), [Stable Diffusion XL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py),
30_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
Diffusion XL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py), [text-to-image](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py), and [Wuerstchen](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/train_text_to_image_lora_prior.py).
30_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
</Tip> This guide will explore the [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ```
30_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Navigate to the example folder with the training script and install the required dependencies for the script you're using: <hfoptions id="installation"> <hfoption id="PyTorch"> ```bash cd examples/text_to_image pip install -r requirements.txt ``` </hfoption> <hfoption id="Flax"> ```bash cd examples/text_to_image pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip>
30_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
```bash cd examples/text_to_image pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ```
30_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
</Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ``` To set up a default 🤗 Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```py from accelerate.utils import write_basic_config
30_1_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip>
30_1_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#lora
.md
<Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) and let us know if you have any questions or concerns. </Tip>
30_1_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#script-parameters
.md
The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L85) function. Default values that work reasonably well are provided for most parameters, but you can also set your own values in the training command if you'd like.
30_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#script-parameters
.md
For example, to increase the number of epochs to train: ```bash accelerate launch train_text_to_image_lora.py \ --num_train_epochs=150 \ ``` Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the LoRA-relevant parameters: - `--rank`: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters
30_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#script-parameters
.md
- `--rank`: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters - `--learning_rate`: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate
30_2_2
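Putting those two options together, a command that raises the rank and keeps the higher LoRA learning rate might look like this (the values are illustrative, not recommendations):

```bash
accelerate launch train_text_to_image_lora.py \
  --rank=8 \
  --learning_rate=1e-04 \
```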
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L371) function, and if you need to adapt the training script, this is where you'll make your changes.
30_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. This guide instead takes a look at the LoRA-relevant parts of the script. <hfoptions id="lora"> <hfoption id="UNet">
30_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
<hfoptions id="lora"> <hfoption id="UNet"> Diffusers uses [`~peft.LoraConfig`] from the [PEFT](https://hf.co/docs/peft) library to set up the parameters of the LoRA adapter such as the rank, alpha, and which modules to insert the LoRA weights into. The adapter is added to the UNet, and only the LoRA layers are filtered for optimization in `lora_layers`. ```py unet_lora_config = LoraConfig( r=args.rank, lora_alpha=args.rank, init_lora_weights="gaussian",
30_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
```py unet_lora_config = LoraConfig( r=args.rank, lora_alpha=args.rank, init_lora_weights="gaussian", target_modules=["to_k", "to_q", "to_v", "to_out.0"], )
30_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
unet.add_adapter(unet_lora_config) lora_layers = filter(lambda p: p.requires_grad, unet.parameters()) ``` </hfoption> <hfoption id="text encoder"> Diffusers also supports finetuning the text encoder with LoRA from the [PEFT](https://hf.co/docs/peft) library when necessary, such as when finetuning Stable Diffusion XL (SDXL). The [`~peft.LoraConfig`] is used to configure the parameters of the LoRA adapter, which is then added to the text encoder, and only the LoRA layers are filtered for training. ```py
30_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
```py text_lora_config = LoraConfig( r=args.rank, lora_alpha=args.rank, init_lora_weights="gaussian", target_modules=["q_proj", "k_proj", "v_proj", "out_proj"], )
30_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
text_encoder_one.add_adapter(text_lora_config) text_encoder_two.add_adapter(text_lora_config) text_lora_parameters_one = list(filter(lambda p: p.requires_grad, text_encoder_one.parameters())) text_lora_parameters_two = list(filter(lambda p: p.requires_grad, text_encoder_two.parameters())) ``` </hfoption> </hfoptions>
30_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
``` </hfoption> </hfoptions> The [optimizer](https://github.com/huggingface/diffusers/blob/e4b8f173b97731686e290b2eb98e7f5df2b1b322/examples/text_to_image/train_text_to_image_lora.py#L529) is initialized with the `lora_layers` because these are the only weights that'll be optimized: ```py optimizer = optimizer_cls( lora_layers, lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) ```
30_3_7
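If you want to double-check that only the LoRA weights will be updated, a quick inspection (not part of the training script) could compare the trainable parameter count against the total after the adapter has been added:

```py
# assumes `unet` already has the LoRA adapter added as shown above
trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
total = sum(p.numel() for p in unet.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")
```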
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#training-script
.md
betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) ``` Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py!
30_3_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#launch-the-script
.md
Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀
30_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#launch-the-script
.md
Let's train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository: - saved model checkpoints
30_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#launch-the-script
.md
- saved model checkpoints - `pytorch_lora_weights.safetensors` (the trained LoRA weights) If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command. <Tip warning={true}> A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM. </Tip> ```bash export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5" export OUTPUT_DIR="/sddata/finetune/lora/naruto" export HUB_MODEL_ID="naruto-lora"
30_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#launch-the-script
.md
export OUTPUT_DIR="/sddata/finetune/lora/naruto" export HUB_MODEL_ID="naruto-lora" export DATASET_NAME="lambdalabs/naruto-blip-captions"
30_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#launch-the-script
.md
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --dataset_name=$DATASET_NAME \ --dataloader_num_workers=8 \ --resolution=512 \ --center_crop \ --random_flip \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --max_train_steps=15000 \ --learning_rate=1e-04 \ --max_grad_norm=1 \ --lr_scheduler="cosine" \ --lr_warmup_steps=0 \ --output_dir=${OUTPUT_DIR} \ --push_to_hub \ --hub_model_id=${HUB_MODEL_ID} \ --report_to=wandb \
30_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#launch-the-script
.md
--lr_warmup_steps=0 \ --output_dir=${OUTPUT_DIR} \ --push_to_hub \ --hub_model_id=${HUB_MODEL_ID} \ --report_to=wandb \ --checkpointing_steps=500 \ --validation_prompt="A naruto with blue eyes." \ --seed=1337 ``` Once training has been completed, you can use your model for inference: ```py from diffusers import AutoPipelineForText2Image import torch
30_4_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#launch-the-script
.md
pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") image = pipeline("A naruto with blue eyes").images[0] ```
30_4_6
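If the LoRA effect is too strong or too weak, recent diffusers versions also let you scale its contribution at inference time; one common pattern (the 0.7 value is just an example) is:

```py
# scale the LoRA contribution (1.0 = full effect, 0.0 = base model only)
image = pipeline(
    "A naruto with blue eyes", cross_attention_kwargs={"scale": 0.7}
).images[0]
```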
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md
https://huggingface.co/docs/diffusers/en/training/lora/#next-steps
.md
Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful: - Learn how to [load different LoRA formats](../using-diffusers/loading_adapters#LoRA) trained using community trainers like Kohya and TheLastBen. - Learn how to use and [combine multiple LoRA's](../tutorials/using_peft_for_inference) with PEFT for inference.
30_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
31_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
31_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
[ControlNet](https://hf.co/papers/2302.05543) models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more.
31_1_0
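For example, a canny edge conditioning image can be created from any photo with OpenCV; this preprocessing sketch assumes `opencv-python` is installed, and the file names are placeholders:

```py
import cv2
import numpy as np
from PIL import Image

image = np.array(Image.open("input.png").convert("RGB"))
edges = cv2.Canny(image, 100, 200)                        # low/high thresholds
edges = np.concatenate([edges[:, :, None]] * 3, axis=2)   # single channel -> 3-channel
Image.fromarray(edges).save("conditioning_canny.png")
```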
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing`, `gradient_accumulation_steps`, and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers). JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train
31_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
but it doesn't support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax.
31_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
This guide will explore the [train_controlnet.py](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet.py) training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ```
31_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then navigate to the example folder containing the training script and install the required dependencies for the script you're using: <hfoptions id="installation"> <hfoption id="PyTorch"> ```bash cd examples/controlnet pip install -r requirements.txt ``` </hfoption> <hfoption id="Flax">
31_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
<hfoption id="PyTorch"> ```bash cd examples/controlnet pip install -r requirements.txt ``` </hfoption> <hfoption id="Flax"> If you have access to a TPU, the Flax training script runs even faster! Let's run the training script on the [Google Cloud TPU VM](https://cloud.google.com/tpu/docs/run-calculation-jax). Create a single TPU v4-8 VM and connect to it: ```bash ZONE=us-central2-b TPU_TYPE=v4-8 VM_NAME=hg_flax
31_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
gcloud alpha compute tpus tpu-vm create $VM_NAME \ --zone $ZONE \ --accelerator-type $TPU_TYPE \ --version tpu-vm-v4-base
31_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
gcloud alpha compute tpus tpu-vm ssh $VM_NAME --zone $ZONE -- \ ``` Install JAX 0.4.5: ```bash pip install "jax[tpu]==0.4.5" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html ``` Then install the required dependencies for the Flax script: ```bash cd examples/controlnet pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip>
31_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
```bash cd examples/controlnet pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ```
31_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
</Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ``` To set up a default 🤗 Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```py from accelerate.utils import write_basic_config
31_1_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip>
31_1_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#controlnet
.md
<Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet.py) and let us know if you have any questions or concerns. </Tip>
31_1_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#script-parameters
.md
The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L231) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.
31_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#script-parameters
.md
For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command: ```bash accelerate launch train_controlnet.py \ --mixed_precision="fp16" ``` Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the relevant parameters for ControlNet:
31_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#script-parameters
.md
- `--max_train_samples`: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you'll need to include this parameter and the `--streaming` parameter in your training command - `--gradient_accumulation_steps`: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle
31_2_2
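For example, to stream a large dataset while capping the number of samples and accumulating gradients over several steps (the values are illustrative):

```bash
accelerate launch train_controlnet.py \
  --streaming \
  --max_train_samples=50000 \
  --gradient_accumulation_steps=4 \
```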
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#min-snr-weighting
.md
The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, and Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the `--snr_gamma` parameter and set it to the recommended value of 5.0: ```bash accelerate launch train_controlnet.py \
31_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#min-snr-weighting
.md
Add the `--snr_gamma` parameter and set it to the recommended value of 5.0: ```bash accelerate launch train_controlnet.py \ --snr_gamma=5.0 ```
31_3_1
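Conceptually, Min-SNR clamps the per-timestep loss weight at `gamma` so that low-noise (high-SNR) timesteps stop dominating the loss. A simplified sketch of the weighting for `epsilon` prediction (not the exact code from the script):

```py
import torch

def min_snr_loss_weights(snr: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    # weight = min(SNR, gamma) / SNR, computed per timestep in the batch
    return torch.clamp(snr, max=gamma) / snr
```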
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
As with the script parameters, a general walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script.
31_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
The training script has a [`make_train_dataset`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L582) function for preprocessing the dataset with image transforms and caption tokenization. You'll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. <Tip>
31_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
<Tip> If you're streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you're encouraged to explore other dataset formats like [WebDataset](https://webdataset.github.io/webdataset/), [TorchData](https://github.com/pytorch/data), and [TensorFlow Datasets](https://www.tensorflow.org/datasets/tfless_tfds). </Tip> ```py conditioning_image_transforms = transforms.Compose( [
31_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
</Tip> ```py conditioning_image_transforms = transforms.Compose( [ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), transforms.CenterCrop(args.resolution), transforms.ToTensor(), ] ) ```
31_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
transforms.CenterCrop(args.resolution), transforms.ToTensor(), ] ) ``` Within the [`main()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L713) function, you'll find the code for loading the tokenizer, text encoder, scheduler and models. This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: ```py if args.controlnet_model_name_or_path:
31_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
```py if args.controlnet_model_name_or_path: logger.info("Loading existing controlnet weights") controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) else: logger.info("Initializing controlnet weights from unet") controlnet = ControlNetModel.from_unet(unet) ``` The [optimizer](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L871) is set up to update the ControlNet parameters: ```py
31_4_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
```py params_to_optimize = controlnet.parameters() optimizer = optimizer_class( params_to_optimize, lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) ``` Finally, in the [training loop](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L943), the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model:
31_4_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
```py encoder_hidden_states = text_encoder(batch["input_ids"])[0] controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype)
31_4_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#training-script
.md
down_block_res_samples, mid_block_res_sample = controlnet( noisy_latents, timesteps, encoder_hidden_states=encoder_hidden_states, controlnet_cond=controlnet_image, return_dict=False, ) ``` If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.
31_4_8
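The returned residuals are then injected into the frozen UNet's down and mid blocks to produce the noise prediction; in the script this step looks roughly like the following (simplified):

```py
model_pred = unet(
    noisy_latents,
    timesteps,
    encoder_hidden_states=encoder_hidden_states,
    # inject the ControlNet features into the UNet's skip connections
    down_block_additional_residuals=[
        sample.to(dtype=weight_dtype) for sample in down_block_res_samples
    ],
    mid_block_additional_residual=mid_block_res_sample.to(dtype=weight_dtype),
).sample
```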
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
Now you're ready to launch the training script! 🚀 This guide uses the [fusing/fill50k](https://huggingface.co/datasets/fusing/fill50k) dataset, but remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide). Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model and `OUTPUT_DIR` to where you want to save the model. Download the following images to condition your training with: ```bash
31_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
Download the following images to condition your training with: ```bash wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png ```
31_5_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
``` One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command. <hfoptions id="gpu-select"> <hfoption id="16GB">
31_5_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
<hfoptions id="gpu-select"> <hfoption id="16GB"> On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: ```py pip install bitsandbytes ``` Then, add the following parameter to your training command: ```bash accelerate launch train_controlnet.py \ --gradient_checkpointing \ --use_8bit_adam \ ``` </hfoption> <hfoption id="12GB">
31_5_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
accelerate launch train_controlnet.py \ --gradient_checkpointing \ --use_8bit_adam \ ``` </hfoption> <hfoption id="12GB"> On a 12GB GPU, you'll need the bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and to set the gradients to `None` instead of zero to reduce your memory usage. ```bash accelerate launch train_controlnet.py \ --use_8bit_adam \ --gradient_checkpointing \ --enable_xformers_memory_efficient_attention \ --set_grads_to_none \ ``` </hfoption> <hfoption id="8GB">
31_5_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
--enable_xformers_memory_efficient_attention \ --set_grads_to_none \ ``` </hfoption> <hfoption id="8GB"> On an 8GB GPU, you'll need to use [DeepSpeed](https://www.deepspeed.ai/) to offload some of the tensors from the vRAM to either the CPU or NVMe to allow training with less GPU memory. Run the following command to configure your 🤗 Accelerate environment: ```bash accelerate config ```
31_5_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
``` During configuration, confirm that you want to use DeepSpeed stage 2. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the [DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options. Your configuration file should look something like: ```bash
31_5_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
```bash compute_environment: LOCAL_MACHINE deepspeed_config: gradient_accumulation_steps: 4 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: false zero_stage: 2 distributed_type: DEEPSPEED ```
31_5_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: false zero_stage: 2 distributed_type: DEEPSPEED ``` You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam [`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu) for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch.
31_5_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. That's it! You don't need to add any additional parameters to your training command. </hfoption> </hfoptions> <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```bash export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5" export OUTPUT_DIR="path/to/save/model"
31_5_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
accelerate launch train_controlnet.py \ --pretrained_model_name_or_path=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --dataset_name=fusing/fill50k \ --resolution=512 \ --learning_rate=1e-5 \ --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --push_to_hub ``` </hfoption> <hfoption id="Flax">
31_5_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
--train_batch_size=1 \ --gradient_accumulation_steps=4 \ --push_to_hub ``` </hfoption> <hfoption id="Flax"> With Flax, you can [profile your code](https://jax.readthedocs.io/en/latest/profiling.html) by adding the `--profile_steps=5` parameter to your training command. Install the TensorBoard profile plugin: ```bash pip install tensorflow tensorboard-plugin-profile tensorboard --logdir runs/fill-circle-100steps-20230411_165612/ ```
31_5_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
```bash pip install tensorflow tensorboard-plugin-profile tensorboard --logdir runs/fill-circle-100steps-20230411_165612/ ``` Then you can inspect the profile at [http://localhost:6006/#profile](http://localhost:6006/#profile). <Tip warning={true}>
31_5_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
<Tip warning={true}> If you run into version conflicts with the plugin, try uninstalling and reinstalling all versions of TensorFlow and Tensorboard. The debugging functionality of the profile plugin is still experimental, and not all views are fully functional. The `trace_viewer` cuts off events after 1M, which can result in all your device traces getting lost if for example, you profile the compilation step by accident. </Tip> ```bash python3 train_controlnet_flax.py \
31_5_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
</Tip> ```bash python3 train_controlnet_flax.py \ --pretrained_model_name_or_path=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --dataset_name=fusing/fill50k \ --resolution=512 \ --learning_rate=1e-5 \ --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ --validation_steps=1000 \ --train_batch_size=2 \ --revision="non-ema" \ --from_pt \ --report_to="wandb" \
31_5_14
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
--validation_steps=1000 \ --train_batch_size=2 \ --revision="non-ema" \ --from_pt \ --report_to="wandb" \ --tracker_project_name=$HUB_MODEL_ID \ --num_train_epochs=11 \ --push_to_hub \ --hub_model_id=$HUB_MODEL_ID ``` </hfoption> </hfoptions> Once training is complete, you can use your newly trained model for inference! ```py from diffusers import StableDiffusionControlNetPipeline, ControlNetModel from diffusers.utils import load_image import torch
31_5_15
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) pipeline = StableDiffusionControlNetPipeline.from_pretrained( "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 ).to("cuda") control_image = load_image("./conditioning_image_1.png") prompt = "pale golden rod circle with old lace background"
31_5_16
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#launch-the-script
.md
control_image = load_image("./conditioning_image_1.png") prompt = "pale golden rod circle with old lace background" generator = torch.manual_seed(0) image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] image.save("./output.png") ```
31_5_17
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#stable-diffusion-xl
.md
Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [`train_controlnet_sdxl.py`](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_sdxl.py) script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide.
31_6_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/controlnet.md
https://huggingface.co/docs/diffusers/en/training/controlnet/#next-steps
.md
Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: - Learn how to [use a ControlNet](../using-diffusers/controlnet) for inference on a variety of tasks.
31_7_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
32_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
32_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
[Textual Inversion](https://hf.co/papers/2208.01618) is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide.
32_1_0
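At inference time, the learned embedding is loaded into a pipeline and triggered by its special word in the prompt. A minimal sketch, where the path and the `<my-concept>` token are placeholders for whatever you trained:

```py
from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# load the learned embedding and refer to it by its placeholder token in the prompt
pipeline.load_textual_inversion("path/to/learned/embedding", token="<my-concept>")
image = pipeline("A photo of <my-concept> on a beach").images[0]
```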
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing` and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers). JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least
32_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster!
32_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
This guide will explore the [textual_inversion.py](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py) script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ```
32_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Navigate to the example folder with the training script and install the required dependencies for the script you're using: <hfoptions id="installation"> <hfoption id="PyTorch"> ```bash cd examples/textual_inversion pip install -r requirements.txt ``` </hfoption> <hfoption id="Flax"> ```bash cd examples/textual_inversion pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip>
32_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
```bash cd examples/textual_inversion pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ```
32_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
</Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ``` To set up a default 🤗 Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```py from accelerate.utils import write_basic_config
32_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip>
32_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#textual-inversion
.md
<Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py) and let us know if you have any questions or concerns. </Tip>
32_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#script-parameters
.md
The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/839c2a5ece0af4e75530cb520d77bc7ed8acf474/examples/textual_inversion/textual_inversion.py#L176) function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if
32_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#script-parameters
.md
parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you'd like.
32_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#script-parameters
.md
For example, to increase the number of gradient accumulation steps above the default value of 1: ```bash accelerate launch textual_inversion.py \ --gradient_accumulation_steps=4 ``` Some other basic and important parameters to specify include: - `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model - `--train_data_dir`: path to a folder containing the training dataset (example images) - `--output_dir`: where to save the trained model
32_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#script-parameters
.md
- `--output_dir`: where to save the trained model - `--push_to_hub`: whether to push the trained model to the Hub - `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for some reason, you can continue training from that checkpoint by adding `--resume_from_checkpoint` to your training command
32_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#script-parameters
.md
- `--num_vectors`: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better, but it comes at a higher training cost - `--placeholder_token`: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) - `--initializer_token`: a single word that roughly describes the object or style you're trying to train on
32_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#script-parameters
.md
- `--initializer_token`: a single word that roughly describes the object or style you're trying to train on - `--learnable_property`: whether you're training the model to learn a new "style" (for example, Van Gogh's painting style) or "object" (for example, your dog)
32_2_5
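Putting these parameters together, a command to teach the model a new object token might look like this (the token, initializer word, and paths are illustrative):

```bash
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
  --train_data_dir="./my_concept_images" \
  --learnable_property="object" \
  --placeholder_token="<my-concept>" \
  --initializer_token="toy" \
  --num_vectors=2 \
  --output_dir="./textual_inversion_output"
```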
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, [`TextualInversionDataset`](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L487) for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify `TextualInversionDataset`.
32_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
Next, you'll find the dataset preprocessing code and training loop in the [`main()`](https://github.com/huggingface/diffusers/blob/839c2a5ece0af4e75530cb520d77bc7ed8acf474/examples/textual_inversion/textual_inversion.py#L573) function.
32_3_1