source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#script-parameters | .md | The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L192) function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like. | 26_2_0 |
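To make the default-plus-override pattern concrete, here is a minimal sketch of how such a `parse_args()` function is typically structured with `argparse`; the parameter names and default values below are illustrative, not copied from the script:
```py
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Training script arguments (illustrative sketch).")
    # Each parameter gets a default, so every flag is optional on the command line.
    parser.add_argument("--train_batch_size", type=int, default=16, help="Batch size per device.")
    parser.add_argument("--learning_rate", type=float, default=1e-4, help="Initial learning rate.")
    parser.add_argument("--mixed_precision", type=str, default="no", choices=["no", "fp16", "bf16"])
    return parser.parse_args()

# e.g. `python train.py --learning_rate 1e-5` overrides only that one default
args = parse_args()
```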
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#script-parameters | .md | For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command:
```bash
accelerate launch train_text_to_image_prior.py \
--mixed_precision="fp16"
```
Most of the parameters are identical to the parameters in the [Text-to-image](text2image#script-parameters) training guide, so let's dive right into the Wuerstchen training script! | 26_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | The training script is also similar to the [Text-to-image](text2image#training-script) training guide, but it's been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. | 26_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | The [`main()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L441) function starts by initializing the image encoder - an [EfficientNet](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/modeling_efficient_net_encoder.py) - in addition to the usual scheduler and tokenizer.
```py
with ContextManagers(deepspeed_zero_init_disabled_context_manager()): | 26_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | ```py
with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt")
state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu")
image_encoder = EfficientNetEncoder()
image_encoder.load_state_dict(state_dict["effnet_state_dict"])
image_encoder.eval()
```
You'll also load the [`WuerstchenPrior`] model for optimization.
```py | 26_3_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | image_encoder.eval()
```
You'll also load the [`WuerstchenPrior`] model for optimization.
```py
prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") | 26_3_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | optimizer = optimizer_cls(
prior.parameters(),
lr=args.learning_rate,
betas=(args.adam_beta1, args.adam_beta2),
weight_decay=args.adam_weight_decay,
eps=args.adam_epsilon,
)
``` | 26_3_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | betas=(args.adam_beta1, args.adam_beta2),
weight_decay=args.adam_weight_decay,
eps=args.adam_epsilon,
)
```
Next, you'll apply some [transforms](https://github.com/huggingface/diffusers/blob/65ef7a0c5c594b4f84092e328fbdd73183613b30/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L656) to the images and [tokenize](https://github.com/huggingface/diffusers/blob/65ef7a0c5c594b4f84092e328fbdd73183613b30/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L637) the captions:
```py | 26_3_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | ```py
def preprocess_train(examples):
images = [image.convert("RGB") for image in examples[image_column]]
examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images]
examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples)
return examples
``` | 26_3_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples)
return examples
```
Finally, the [training loop](https://github.com/huggingface/diffusers/blob/65ef7a0c5c594b4f84092e328fbdd73183613b30/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L656) handles compressing the images to latent space with the `EfficientNetEncoder`, adding noise to the latents, and predicting the noise residual with the [`WuerstchenPrior`] model.
```py | 26_3_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#training-script | .md | ```py
pred_noise = prior(noisy_latents, timesteps, prompt_embeds)
```
If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. | 26_3_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#launch-the-script | .md | Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀
Set the `DATASET_NAME` environment variable to the dataset name from the Hub. This guide uses the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset, but you can create and train on your own datasets as well (see the [Create a dataset for training](create_dataset) guide).
<Tip> | 26_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#launch-the-script | .md | <Tip>
To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You’ll also need to add the `--validation_prompt` to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.
</Tip>
```bash
export DATASET_NAME="lambdalabs/naruto-blip-captions" | 26_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#launch-the-script | .md | accelerate launch train_text_to_image_prior.py \
--mixed_precision="fp16" \
--dataset_name=$DATASET_NAME \
--resolution=768 \
--train_batch_size=4 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--dataloader_num_workers=4 \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--checkpoints_total_limit=3 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--validation_prompts="A robot naruto, 4k photo" \
--report_to="wandb" \
--push_to_hub \ | 26_4_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#launch-the-script | .md | --lr_warmup_steps=0 \
--validation_prompts="A robot naruto, 4k photo" \
--report_to="wandb" \
--push_to_hub \
--output_dir="wuerstchen-prior-naruto-model"
```
Once training is complete, you can use your newly trained model for inference!
```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS | 26_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#launch-the-script | .md | pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda")
caption = "A cute bird naruto holding a shield"
images = pipeline(
caption,
width=1024,
height=1536,
prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS,
prior_guidance_scale=4.0,
num_images_per_prompt=2,
).images
``` | 26_4_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md | https://huggingface.co/docs/diffusers/en/training/wuerstchen/#next-steps | .md | Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful:
- Take a look at the [Wuerstchen](../api/pipelines/wuerstchen#text-to-image-generation) API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. | 26_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#create-a-dataset-for-training | .md | There are many datasets on the [Hub](https://huggingface.co/datasets?task_categories=task_categories:text-to-image&sort=downloads) to train a model on, but if you can't find one you're interested in or want to use your own, you can create a dataset with the 🤗 [Datasets](https://huggingface.co/docs/datasets) library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another | 27_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#create-a-dataset-for-training | .md | your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. | 27_0_1 |
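For the text-to-image case, the 🤗 Datasets `ImageFolder` builder can pick up captions from a `metadata.jsonl` file placed next to the images. A minimal, hypothetical layout and loading call might look like this (file names and captions are made up for illustration):
```py
# folder/
# ├── metadata.jsonl
# ├── 0001.png
# └── 0002.png
#
# metadata.jsonl (one JSON object per line, keyed by file_name):
# {"file_name": "0001.png", "text": "a photo of a red fox"}
# {"file_name": "0002.png", "text": "a photo of a snowy mountain"}

from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="folder", split="train")
print(dataset[0]["image"], dataset[0]["text"])  # PIL image and its caption
```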
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#create-a-dataset-for-training | .md | This guide will show you two ways to create a dataset to finetune on:
- provide a folder of images to the `--train_data_dir` argument
- upload a dataset to the Hub and pass the dataset repository id to the `--dataset_name` argument
<Tip>
💡 Learn more about how to create an image dataset for training in the [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset) guide.
</Tip> | 27_0_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#provide-a-dataset-as-a-folder | .md | For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the [`ImageFolder`](https://huggingface.co/docs/datasets/en/image_dataset#imagefolder) builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like:
```bash
data_dir/xxx.png
data_dir/xxy.png
data_dir/[...]/xxz.png
```
Pass the path to the dataset directory to the `--train_data_dir` argument, and then you can start training:
```bash | 27_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#provide-a-dataset-as-a-folder | .md | ```
Pass the path to the dataset directory to the `--train_data_dir` argument, and then you can start training:
```bash
accelerate launch train_unconditional.py \
--train_data_dir <path-to-train-directory> \
<other-arguments>
``` | 27_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#upload-your-data-to-the-hub | .md | <Tip>
💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the [Image search with 🤗 Datasets](https://huggingface.co/blog/image-search-datasets) post.
</Tip>
Start by creating a dataset with the [`ImageFolder`](https://huggingface.co/docs/datasets/image_load#imagefolder) feature, which creates an `image` column containing the PIL-encoded images. | 27_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#upload-your-data-to-the-hub | .md | You can use the `data_dir` or `data_files` parameters to specify the location of the dataset. The `data_files` parameter supports mapping specific files to dataset splits like `train` or `test`:
```python
from datasets import load_dataset | 27_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#upload-your-data-to-the-hub | .md | # example 1: local folder
dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset(
"imagefolder",
data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip",
) | 27_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#upload-your-data-to-the-hub | .md | # example 4: providing several splits
dataset = load_dataset(
"imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]}
)
```
Then use the [`~datasets.Dataset.push_to_hub`] method to upload the dataset to the Hub:
```python
# assuming you have run the huggingface-cli login command in a terminal
dataset.push_to_hub("name_of_your_dataset") | 27_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#upload-your-data-to-the-hub | .md | # if you want to push to a private repo, simply pass private=True:
dataset.push_to_hub("name_of_your_dataset", private=True)
```
Now the dataset is available for training by passing the dataset name to the `--dataset_name` argument:
```bash
accelerate launch --mixed_precision="fp16" train_text_to_image.py \
--pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
--dataset_name="name_of_your_dataset" \
<other-arguments>
``` | 27_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/create_dataset.md | https://huggingface.co/docs/diffusers/en/training/create_dataset/#next-steps | .md | Now that you've created a dataset, you can plug it into the `train_data_dir` (if your dataset is local) or `dataset_name` (if your dataset is on the Hub) arguments of a training script.
For your next steps, feel free to try and use your dataset to train a model for [unconditional generation](unconditional_training) or [text-to-image generation](text2image)! | 27_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 28_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 28_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | <Tip warning={true}>
This script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.
</Tip>
[Stable Diffusion XL (SDXL)](https://hf.co/papers/2307.01952) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. | 28_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | SDXL's UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speed up training, try enabling `gradient_checkpointing`, `mixed_precision`, and `gradient_accumulation_steps`. You can reduce your memory usage even more by enabling memory-efficient attention with [xFormers](../optimization/xformers) | 28_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | You can reduce your memory usage even more by enabling memory-efficient attention with [xFormers](../optimization/xformers) and using [bitsandbytes'](https://github.com/TimDettmers/bitsandbytes) 8-bit optimizer. | 28_1_2 |
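To illustrate two of those memory savers outside of the launch command, this is roughly what enabling xFormers attention and swapping in bitsandbytes' 8-bit AdamW looks like in code; it's a sketch that assumes `unet` is already loaded and the `xformers` and `bitsandbytes` packages are installed:
```py
import bitsandbytes as bnb

# Memory-efficient attention via xFormers (the training script exposes this as a CLI flag).
unet.enable_xformers_memory_efficient_attention()

# 8-bit AdamW stores optimizer state in 8-bit, greatly shrinking its memory footprint.
optimizer = bnb.optim.AdamW8bit(
    unet.parameters(),
    lr=1e-6,
    betas=(0.9, 0.999),
    weight_decay=1e-2,
)
```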
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | This guide will explore the [train_text_to_image_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) training script to help you become more familiar with it, and how you can adapt it for your own use-case.
Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
``` | 28_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | ```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:
```bash
cd examples/text_to_image
pip install -r requirements_sdxl.txt
```
<Tip> | 28_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | ```bash
cd examples/text_to_image
pip install -r requirements_sdxl.txt
```
<Tip>
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
``` | 28_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | </Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
To setup a default 🤗 Accelerate environment without choosing any configurations:
```bash
accelerate config default
```
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
```py
from accelerate.utils import write_basic_config | 28_1_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#stable-diffusion-xl | .md | write_basic_config()
```
Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. | 28_1_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#script-parameters | .md | <Tip>
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) and let us know if you have any questions or concerns.
</Tip> | 28_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#script-parameters | .md | The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L129) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like. | 28_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#script-parameters | .md | For example, to speed up training with mixed precision using the bf16 format, add the `--mixed_precision` parameter to the training command:
```bash
accelerate launch train_text_to_image_sdxl.py \
--mixed_precision="bf16"
```
Most of the parameters are identical to the parameters in the [Text-to-image](text2image#script-parameters) training guide, so you'll focus on the parameters that are relevant to training SDXL in this guide. | 28_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#script-parameters | .md | - `--pretrained_vae_model_name_or_path`: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better [VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)
- `--proportion_empty_prompts`: the proportion of image prompts to replace with empty strings
- `--timestep_bias_strategy`: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details | 28_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#script-parameters | .md | - `--timestep_bias_multiplier`: the weight of the bias to apply to the timestep
- `--timestep_bias_begin`: the timestep to begin applying the bias
- `--timestep_bias_end`: the timestep to end applying the bias
- `--timestep_bias_portion`: the proportion of timesteps to apply the bias to | 28_2_4 |
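As a side note on the first parameter in the list above, you can also load the numerically more stable VAE yourself for inspection or inference; a short sketch (the training script takes care of this for you when you pass `--pretrained_vae_model_name_or_path`):
```py
import torch
from diffusers import AutoencoderKL

# The fp16-fix VAE avoids the numerical instability of the original SDXL VAE in half precision.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
```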
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#min-snr-weighting | .md | The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either `epsilon` (noise) or `v_prediction`, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script.
Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:
```bash | 28_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#min-snr-weighting | .md | Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:
```bash
accelerate launch train_text_to_image_sdxl.py \
--snr_gamma=5.0
``` | 28_3_1 |
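For intuition, Min-SNR rescales the per-timestep loss weight by min(SNR(t), γ)/SNR(t) when predicting epsilon; a simplified sketch of that computation from the scheduler's `alphas_cumprod` might look like the following (helper name and details are illustrative, and v-prediction uses a slightly different weight):
```py
import torch

def min_snr_loss_weights(noise_scheduler, timesteps, snr_gamma=5.0):
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t) for a variance-preserving diffusion process.
    alphas_cumprod = noise_scheduler.alphas_cumprod.to(timesteps.device)[timesteps]
    snr = alphas_cumprod / (1.0 - alphas_cumprod)
    # Clamp the weight so very-low-noise timesteps don't dominate the loss.
    return torch.clamp(snr, max=snr_gamma) / snr
```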
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | The training script is also similar to the [Text-to-image](text2image#training-script) training guide, but it's been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. | 28_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | It starts by creating functions to [tokenize the prompts](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L478) to calculate the prompt embeddings, and to compute the image embeddings with the [VAE](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L519). Next, you'll need a function to [generate the timestep | 28_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | Next, you'll need a function to [generate the timestep weights](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L531) depending on the number of timesteps and the timestep bias strategy to apply. | 28_4_2 |
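To make the bias idea concrete, a simplified version of such a weighting function could look like the sketch below; the real script supports more strategies and options, and the names here are assumptions:
```py
import torch

def make_timestep_weights(num_timesteps, strategy="none", multiplier=2.0, portion=0.25):
    # Start from uniform sampling over all timesteps, then upweight a slice of them.
    weights = torch.ones(num_timesteps)
    num_biased = int(num_timesteps * portion)
    if strategy == "earlier":
        weights[:num_biased] *= multiplier
    elif strategy == "later":
        weights[-num_biased:] *= multiplier
    return weights / weights.sum()

# Sampling biased timesteps for a batch, mirroring the torch.multinomial call in the script:
weights = make_timestep_weights(1000, strategy="later", multiplier=2.0, portion=0.25)
timesteps = torch.multinomial(weights, num_samples=4, replacement=True).long()
```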
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | Within the [`main()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L572) function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each:
```py
tokenizer_one = AutoTokenizer.from_pretrained(
args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False
) | 28_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False
)
tokenizer_two = AutoTokenizer.from_pretrained(
args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False
) | 28_4_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | text_encoder_cls_one = import_model_class_from_model_name_or_path(
args.pretrained_model_name_or_path, args.revision
)
text_encoder_cls_two = import_model_class_from_model_name_or_path(
args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2"
)
``` | 28_4_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | The [prompt and image embeddings](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L857) are computed first and kept in memory, which isn't typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this | 28_4_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this [PR](https://github.com/huggingface/diffusers/pull/4505) for more discussion about this topic). | 28_4_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | ```py
text_encoders = [text_encoder_one, text_encoder_two]
tokenizers = [tokenizer_one, tokenizer_two]
compute_embeddings_fn = functools.partial(
encode_prompt,
text_encoders=text_encoders,
tokenizers=tokenizers,
proportion_empty_prompts=args.proportion_empty_prompts,
caption_column=args.caption_column,
) | 28_4_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
train_dataset = train_dataset.map(
compute_vae_encodings_fn,
batched=True,
batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps,
new_fingerprint=new_fingerprint_for_vae,
)
```
After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory:
```py
del text_encoders, tokenizers, vae
gc.collect() | 28_4_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | ```py
del text_encoders, tokenizers, vae
gc.collect()
torch.cuda.empty_cache()
```
Finally, the [training loop](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L943) takes care of the rest. If you chose to apply a timestep bias strategy, you'll see the timestep weights are calculated and added as noise:
```py
weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( | 28_4_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | ```py
weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to(
model_input.device
)
timesteps = torch.multinomial(weights, bsz, replacement=True).long() | 28_4_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#training-script | .md | noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps)
```
If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. | 28_4_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 | 28_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | Let’s train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with `VAE_NAME` to avoid numerical instabilities.
<Tip> | 28_5_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | <Tip>
To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You’ll also need to add the `--validation_prompt` and `--validation_epochs` to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.
</Tip>
```bash
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" | 28_5_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | </Tip>
```bash
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export VAE_NAME="madebyollin/sdxl-vae-fp16-fix"
export DATASET_NAME="lambdalabs/naruto-blip-captions" | 28_5_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | accelerate launch train_text_to_image_sdxl.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--pretrained_vae_model_name_or_path=$VAE_NAME \
--dataset_name=$DATASET_NAME \
--enable_xformers_memory_efficient_attention \
--resolution=512 \
--center_crop \
--random_flip \
--proportion_empty_prompts=0.2 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--max_train_steps=10000 \
--use_8bit_adam \
--learning_rate=1e-06 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \ | 28_5_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | --max_train_steps=10000 \
--use_8bit_adam \
--learning_rate=1e-06 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--mixed_precision="fp16" \
--report_to="wandb" \
--validation_prompt="a cute Sundar Pichai creature" \
--validation_epochs 5 \
--checkpointing_steps=5000 \
--output_dir="sdxl-naruto-model" \
--push_to_hub
```
After you've finished training, you can use your newly trained SDXL model for inference!
<hfoptions id="inference">
<hfoption id="PyTorch">
```py | 28_5_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | <hfoptions id="inference">
<hfoption id="PyTorch">
```py
from diffusers import DiffusionPipeline
import torch | 28_5_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") | 28_5_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | prompt = "A naruto with green eyes and red legs."
image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("naruto.png")
```
</hfoption>
<hfoption id="PyTorch XLA"> | 28_5_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | image.save("naruto.png")
```
</hfoption>
<hfoption id="PyTorch XLA">
[PyTorch XLA](https://pytorch.org/xla) allows you to run PyTorch on XLA devices such as TPUs, which can be faster. The initial warmup step takes longer because the model needs to be compiled and optimized. However, subsequent calls to the pipeline on an input **with the same length** as the original prompt are much faster because it can reuse the optimized graph.
```py
from diffusers import DiffusionPipeline
import torch | 28_5_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | ```py
from diffusers import DiffusionPipeline
import torch
from time import time  # needed for the timing calls below (missing from the original snippet)
import torch_xla.core.xla_model as xm | 28_5_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | device = xm.xla_device()
inference_steps = 30  # example value; the original snippet leaves this undefined
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to(device)
prompt = "A naruto with green eyes and red legs."
start = time()
image = pipeline(prompt, num_inference_steps=inference_steps).images[0]
print(f'Compilation time is {time()-start} sec')
image.save("naruto.png") | 28_5_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#launch-the-script | .md | start = time()
image = pipeline(prompt, num_inference_steps=inference_steps).images[0]
print(f'Inference time is {time()-start} sec after compilation')
```
</hfoption>
</hfoptions> | 28_5_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#next-steps | .md | Congratulations on training an SDXL model! To learn more about how to use your new model, the following guides may be helpful:
- Read the [Stable Diffusion XL](../using-diffusers/sdxl) guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use its refiner model, and the different types of micro-conditionings. | 28_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/sdxl.md | https://huggingface.co/docs/diffusers/en/training/sdxl/#next-steps | .md | - Check out the [DreamBooth](dreambooth) and [LoRA](lora) training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! | 28_6_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 29_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 29_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#unconditional-image-generation | .md | Unconditional image generation models are not conditioned on text or images during training. They only generate images that resemble their training data distribution.
This guide will explore the [train_unconditional.py](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) training script to help you become familiar with it, and how you can adapt it for your own use-case.
Before running the script, make sure you install the library from source: | 29_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#unconditional-image-generation | .md | Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Then navigate to the example folder containing the training script and install the required dependencies:
```bash
cd examples/unconditional_image_generation
pip install -r requirements.txt
```
<Tip> | 29_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#unconditional-image-generation | .md | ```bash
cd examples/unconditional_image_generation
pip install -r requirements.txt
```
<Tip>
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
``` | 29_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#unconditional-image-generation | .md | </Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
To setup a default 🤗 Accelerate environment without choosing any configurations:
```bash
accelerate config default
```
Or if your environment doesn't support an interactive shell like a notebook, you can use:
```py
from accelerate.utils import write_basic_config | 29_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#unconditional-image-generation | .md | write_basic_config()
```
Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. | 29_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#script-parameters | .md | <Tip>
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) and let us know if you have any questions or concerns.
</Tip> | 29_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#script-parameters | .md | The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L55) function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like. | 29_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#script-parameters | .md | For example, to speed up training with mixed precision using the bf16 format, add the `--mixed_precision` parameter to the training command:
```bash
accelerate launch train_unconditional.py \
--mixed_precision="bf16"
```
Some basic and important parameters to specify include:
- `--dataset_name`: the name of the dataset on the Hub or a local path to the dataset to train on
- `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub | 29_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#script-parameters | .md | - `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub
- `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted, you can continue training from that checkpoint by adding `--resume_from_checkpoint` to your training command
Bring your dataset, and let the training script handle everything else! | 29_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | The code for preprocessing the dataset and the training loop is found in the [`main()`](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L275) function. If you need to adapt the training script, this is where you'll need to make your changes. | 29_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | The `train_unconditional` script [initializes a `UNet2DModel`](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L356) if you don't provide a model configuration. You can configure the UNet here if you'd like:
```py
model = UNet2DModel(
sample_size=args.resolution,
in_channels=3,
out_channels=3,
layers_per_block=2,
block_out_channels=(128, 128, 256, 256, 512, 512),
down_block_types=(
"DownBlock2D", | 29_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | out_channels=3,
layers_per_block=2,
block_out_channels=(128, 128, 256, 256, 512, 512),
down_block_types=(
"DownBlock2D",
"DownBlock2D",
"DownBlock2D",
"DownBlock2D",
"AttnDownBlock2D",
"DownBlock2D",
),
up_block_types=(
"UpBlock2D",
"AttnUpBlock2D",
"UpBlock2D",
"UpBlock2D",
"UpBlock2D",
"UpBlock2D",
),
)
``` | 29_3_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | "DownBlock2D",
),
up_block_types=(
"UpBlock2D",
"AttnUpBlock2D",
"UpBlock2D",
"UpBlock2D",
"UpBlock2D",
"UpBlock2D",
),
)
```
Next, the script initializes a [scheduler](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L418) and [optimizer](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L429):
```py | 29_3_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | ```py
# Initialize the scheduler
accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys())
if accepts_prediction_type:
noise_scheduler = DDPMScheduler(
num_train_timesteps=args.ddpm_num_steps,
beta_schedule=args.ddpm_beta_schedule,
prediction_type=args.prediction_type,
)
else:
noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) | 29_3_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | # Initialize the optimizer
optimizer = torch.optim.AdamW(
model.parameters(),
lr=args.learning_rate,
betas=(args.adam_beta1, args.adam_beta2),
weight_decay=args.adam_weight_decay,
eps=args.adam_epsilon,
)
``` | 29_3_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | betas=(args.adam_beta1, args.adam_beta2),
weight_decay=args.adam_weight_decay,
eps=args.adam_epsilon,
)
```
Then it [loads a dataset](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L451) and you can specify how to [preprocess](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L455) it:
```py | 29_3_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | ```py
dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") | 29_3_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | augmentations = transforms.Compose(
[
transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5]),
]
)
``` | 29_3_8 |
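These augmentations are then typically attached to the dataset with `set_transform` so they run lazily on each batch; a sketch of that wiring, with the function name assumed:
```py
import torch

def transform_images(examples):
    # Apply the augmentation pipeline on the fly, converting every image to RGB first.
    images = [augmentations(image.convert("RGB")) for image in examples["image"]]
    return {"input": images}

dataset.set_transform(transform_images)
train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
```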
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | Finally, the [training loop](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L540) handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and | 29_3_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#training-script | .md | to the Hub. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. | 29_3_10 |
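In outline, each iteration of that loop follows the standard DDPM recipe; a condensed sketch assuming epsilon prediction and omitting gradient accumulation, EMA, logging, and checkpointing might read:
```py
import torch
import torch.nn.functional as F

for batch in train_dataloader:
    clean_images = batch["input"]
    noise = torch.randn_like(clean_images)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],), device=clean_images.device
    ).long()

    # Forward-diffuse the clean images, then train the UNet to predict the added noise.
    noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)
    model_output = model(noisy_images, timesteps).sample
    loss = F.mse_loss(model_output, noise)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```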
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#launch-the-script | .md | Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀
<Tip warning={true}>
A full training run takes 2 hours on 4xV100 GPUs.
</Tip>
<hfoptions id="launchtraining">
<hfoption id="single GPU">
```bash
accelerate launch train_unconditional.py \
--dataset_name="huggan/flowers-102-categories" \
--output_dir="ddpm-ema-flowers-64" \
--mixed_precision="fp16" \
--push_to_hub
```
</hfoption>
<hfoption id="multi-GPU"> | 29_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#launch-the-script | .md | --output_dir="ddpm-ema-flowers-64" \
--mixed_precision="fp16" \
--push_to_hub
```
</hfoption>
<hfoption id="multi-GPU">
If you're training with more than one GPU, add the `--multi_gpu` parameter to the training command:
```bash
accelerate launch --multi_gpu train_unconditional.py \
--dataset_name="huggan/flowers-102-categories" \
--output_dir="ddpm-ema-flowers-64" \
--mixed_precision="fp16" \
--push_to_hub
```
</hfoption>
</hfoptions> | 29_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#launch-the-script | .md | --output_dir="ddpm-ema-flowers-64" \
--mixed_precision="fp16" \
--push_to_hub
```
</hfoption>
</hfoptions>
The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference:
```py
from diffusers import DiffusionPipeline
import torch | 29_4_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/unconditional_training.md | https://huggingface.co/docs/diffusers/en/training/unconditional_training/#launch-the-script | .md | pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
image = pipeline().images[0]
``` | 29_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lora.md | https://huggingface.co/docs/diffusers/en/training/lora/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 30_0_0 |