source: stringclasses (273 values)
url: stringlengths (47-172)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
The script starts by loading the [tokenizer](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L616), [scheduler and model](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L622): ```py # Load tokenizer if args.tokenizer_name: tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) elif args.pretrained_model_name_or_path:
32_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
if args.tokenizer_name: tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) elif args.pretrained_model_name_or_path: tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
32_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
# Load scheduler and models noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") text_encoder = CLIPTextModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision ) vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) unet = UNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision ) ```
32_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision ) ``` Next, the special [placeholder token](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L632) is added to the tokenizer, and the token embeddings are resized to account for the new token.
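A minimal sketch of how that step typically looks, assuming the `<cat-toy>` placeholder and `toy` initializer tokens used later in this guide (the variable names are illustrative rather than the script's exact code):

```py
from transformers import CLIPTokenizer, CLIPTextModel

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Add the placeholder token to the tokenizer
num_added_tokens = tokenizer.add_tokens(["<cat-toy>"])
if num_added_tokens == 0:
    raise ValueError("The placeholder token already exists in the tokenizer vocabulary.")

# Resize the token embeddings so the new token gets its own row
text_encoder.resize_token_embeddings(len(tokenizer))

# Initialize the new embedding from the initializer token's embedding
initializer_token_id = tokenizer.encode("toy", add_special_tokens=False)[0]
placeholder_token_id = tokenizer.convert_tokens_to_ids("<cat-toy>")
token_embeds = text_encoder.get_input_embeddings().weight.data
token_embeds[placeholder_token_id] = token_embeds[initializer_token_id].clone()
```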
32_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
Then, the script [creates a dataset](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L716) from the `TextualInversionDataset`: ```py train_dataset = TextualInversionDataset( data_root=args.train_data_dir, tokenizer=tokenizer, size=args.resolution, placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), repeats=args.repeats, learnable_property=args.learnable_property,
32_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
repeats=args.repeats, learnable_property=args.learnable_property, center_crop=args.center_crop, set="train", ) train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers ) ```
32_3_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers ) ``` Finally, the [training loop](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L784) handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token.
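One step of that loop follows the standard diffusion training recipe. Below is a compressed sketch, assuming the models loaded earlier plus an `optimizer` over the text encoder's token embeddings and a `batch` from `train_dataloader` are in scope (a simplification, not the script's exact code):

```py
import torch
import torch.nn.functional as F

# Encode the images into latents and add noise at a random timestep
latents = vae.encode(batch["pixel_values"]).latent_dist.sample() * vae.config.scaling_factor
noise = torch.randn_like(latents)
timesteps = torch.randint(
    0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
)
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

# Condition the UNet on the text (which contains the placeholder token) and predict the noise
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

# Only the placeholder token's embedding is trainable; the script restores the
# other embedding rows after each step so they stay frozen
loss = F.mse_loss(model_pred.float(), noise.float(), reduction="mean")
loss.backward()
optimizer.step()
optimizer.zero_grad()
```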
32_3_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#training-script
.md
If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.
32_3_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! πŸš€ For this guide, you'll download some images of a [cat toy](https://huggingface.co/datasets/diffusers/cat_toy_example) and store them in a directory. But remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide). ```py from huggingface_hub import snapshot_download
32_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
local_dir = "./cat" snapshot_download( "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" ) ``` Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, and `DATA_DIR` to the path where you just downloaded the cat images. The script creates and saves the following files to your repository: - `learned_embeds.bin`: the learned embedding vectors corresponding to your example images
32_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
- `learned_embeds.bin`: the learned embedding vectors corresponding to your example images - `token_identifier.txt`: the special placeholder token - `type_of_concept.txt`: the type of concept you're training on (either "object" or "style") <Tip warning={true}> A full training run takes ~1 hour on a single V100 GPU. </Tip>
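`learned_embeds.bin` is a PyTorch-serialized dictionary mapping the placeholder token to its learned embedding tensor(s), so you can inspect it directly. The output directory below assumes the `textual_inversion_cat` example used in the training command later in this guide:

```py
import torch

learned_embeds = torch.load("textual_inversion_cat/learned_embeds.bin", map_location="cpu")
for token, embedding in learned_embeds.items():
    # e.g. "<cat-toy>" with a 768-dimensional embedding for Stable Diffusion 1.5
    print(token, embedding.shape)
```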
32_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
<Tip warning={true}> A full training run takes ~1 hour on a single V100 GPU. </Tip> One more thing before you launch the script. If you're interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: ```bash --validation_prompt="A <cat-toy> train" --num_validation_images=4 --validation_steps=100 ``` <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```bash
32_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
--num_validation_images=4 --validation_steps=100 ``` <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```bash export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5" export DATA_DIR="./cat"
32_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
accelerate launch textual_inversion.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --train_data_dir=$DATA_DIR \ --learnable_property="object" \ --placeholder_token="<cat-toy>" \ --initializer_token="toy" \ --resolution=512 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --max_train_steps=3000 \ --learning_rate=5.0e-04 \ --scale_lr \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --output_dir="textual_inversion_cat" \ --push_to_hub ``` </hfoption> <hfoption id="Flax"> ```bash
32_4_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
--lr_warmup_steps=0 \ --output_dir="textual_inversion_cat" \ --push_to_hub ``` </hfoption> <hfoption id="Flax"> ```bash export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" export DATA_DIR="./cat"
32_4_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
python textual_inversion_flax.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --train_data_dir=$DATA_DIR \ --learnable_property="object" \ --placeholder_token="<cat-toy>" \ --initializer_token="toy" \ --resolution=512 \ --train_batch_size=1 \ --max_train_steps=3000 \ --learning_rate=5.0e-04 \ --scale_lr \ --output_dir="textual_inversion_cat" \ --push_to_hub ``` </hfoption> </hfoptions> After training is complete, you can use your newly trained model for inference like:
32_4_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
``` </hfoption> </hfoptions> After training is complete, you can use your newly trained model for inference like: <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```py from diffusers import StableDiffusionPipeline import torch
32_4_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
pipeline = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") pipeline.load_textual_inversion("sd-concepts-library/cat-toy") image = pipeline("A <cat-toy> train", num_inference_steps=50).images[0] image.save("cat-train.png") ``` </hfoption> <hfoption id="Flax">
32_4_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
image.save("cat-train.png") ``` </hfoption> <hfoption id="Flax"> Flax doesn't support the [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] method, but the textual_inversion_flax.py script [saves](https://github.com/huggingface/diffusers/blob/c0f058265161178f2a88849e92b37ffdc81f1dcc/examples/textual_inversion/textual_inversion_flax.py#L636C2-L636C2) the learned embeddings as a part of the model after training. This means you can use the model for inference like any other Flax model:
32_4_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
```py import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline
32_4_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
model_path = "path-to-your-trained-model" pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16) prompt = "A <cat-toy> train" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, jax.device_count()) prompt_ids = shard(prompt_ids)
32_4_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#launch-the-script
.md
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) images[0].save("cat-train.png") ``` </hfoption> </hfoptions>
32_4_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text_inversion.md
https://huggingface.co/docs/diffusers/en/training/text_inversion/#next-steps
.md
Congratulations on training your own Textual Inversion model! πŸŽ‰ To learn more about how to use your new model, the following guides may be helpful: - Learn how to [load Textual Inversion embeddings](../using-diffusers/loading_adapters) and also use them as negative embeddings. - Learn how to use [Textual Inversion](textual_inversion_inference) for inference with Stable Diffusion 1/2 and Stable Diffusion XL.
32_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/ddpo.md
https://huggingface.co/docs/diffusers/en/training/ddpo/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
33_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/ddpo.md
https://huggingface.co/docs/diffusers/en/training/ddpo/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
33_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/ddpo.md
https://huggingface.co/docs/diffusers/en/training/ddpo/#reinforcement-learning-training-with-ddpo
.md
You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the πŸ€— TRL library and πŸ€— Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in [Training Diffusion Models with Reinforcement Learning](https://arxiv.org/abs/2305.13301), which is implemented in πŸ€— TRL with the [`~trl.DDPOTrainer`].
33_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/ddpo.md
https://huggingface.co/docs/diffusers/en/training/ddpo/#reinforcement-learning-training-with-ddpo
.md
For more information, check out the [`~trl.DDPOTrainer`] API reference and the [Finetune Stable Diffusion Models with DDPO via TRL](https://huggingface.co/blog/trl-ddpo) blog post.
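As a rough illustration of how the pieces fit together, a minimal DDPO setup with TRL might look like the sketch below; the prompt and reward functions are toy placeholders, and the `DDPOConfig` fields shown should be checked against the TRL documentation for your version:

```py
import torch
from trl import DDPOConfig, DDPOTrainer, DefaultDDPOStableDiffusionPipeline

def prompt_fn():
    # Return a prompt and a metadata dict for each sample (toy placeholder)
    return "a photo of a cute corgi", {}

def reward_fn(images, prompts, metadata):
    # Return one scalar reward per generated image; here a dummy brightness reward
    rewards = torch.stack([image.float().mean() for image in images])
    return rewards, {}

pipeline = DefaultDDPOStableDiffusionPipeline("stable-diffusion-v1-5/stable-diffusion-v1-5")
config = DDPOConfig(num_epochs=10, sample_batch_size=4, train_batch_size=2)

trainer = DDPOTrainer(config, reward_fn, prompt_fn, pipeline)
trainer.train()
```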
33_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
34_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
34_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#t2i-adapter
.md
[T2I-Adapter](https://hf.co/papers/2302.08453) is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because it only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model.
34_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#t2i-adapter
.md
The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the [train_t2i_adapter_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/train_t2i_adapter_sdxl.py) training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install .
34_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#t2i-adapter
.md
```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then navigate to the example folder containing the training script and install the required dependencies for the script you're using: ```bash cd examples/t2i_adapter pip install -r requirements.txt ``` <Tip>
34_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#t2i-adapter
.md
```bash cd examples/t2i_adapter pip install -r requirements.txt ``` <Tip> πŸ€— Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the πŸ€— Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an πŸ€— Accelerate environment: ```bash accelerate config ```
34_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#t2i-adapter
.md
</Tip> Initialize an πŸ€— Accelerate environment: ```bash accelerate config ``` To set up a default πŸ€— Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```py from accelerate.utils import write_basic_config
34_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#t2i-adapter
.md
write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip>
34_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#t2i-adapter
.md
<Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/train_t2i_adapter_sdxl.py) and let us know if you have any questions or concerns. </Tip>
34_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#script-parameters
.md
The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L233) function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.
34_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#script-parameters
.md
For example, to activate gradient accumulation, add the `--gradient_accumulation_steps` parameter to the training command: ```bash accelerate launch train_t2i_adapter_sdxl.py \ --gradient_accumulation_steps=4 ``` Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the relevant T2I-Adapter parameters:
34_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#script-parameters
.md
- `--pretrained_vae_model_name_or_path`: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better [VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) - `--crops_coords_top_left_h` and `--crops_coords_top_left_w`: height and width coordinates to include in SDXL's crop coordinate embeddings - `--conditioning_image_column`: the column of the conditioning images in the dataset
34_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#script-parameters
.md
- `--conditioning_image_column`: the column of the conditioning images in the dataset - `--proportion_empty_prompts`: the proportion of image prompts to replace with empty strings
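For example, a command that sets these adapter-specific parameters might look like the following (the values are only illustrative):

```bash
accelerate launch train_t2i_adapter_sdxl.py \
  --pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
  --crops_coords_top_left_h=0 \
  --crops_coords_top_left_w=0 \
  --conditioning_image_column="conditioning_image" \
  --proportion_empty_prompts=0.5
```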
34_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
As with the script parameters, a general walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide, so this guide focuses on the T2I-Adapter relevant parts of the script.
34_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
The training script begins by preparing the dataset. This includes [tokenizing](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L674) the prompt and [applying transforms](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L714) to the images and conditioning images. ```py conditioning_image_transforms = transforms.Compose( [
34_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
```py conditioning_image_transforms = transforms.Compose( [ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), transforms.CenterCrop(args.resolution), transforms.ToTensor(), ] ) ``` Within the [`main()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L770) function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: ```py
34_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
```py if args.adapter_model_name_or_path: logger.info("Loading existing adapter weights.") t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) else: logger.info("Initializing t2iadapter weights.") t2iadapter = T2IAdapter( in_channels=3, channels=(320, 640, 1280, 1280), num_res_blocks=2, downscale_factor=16, adapter_type="full_adapter_xl", ) ```
34_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
in_channels=3, channels=(320, 640, 1280, 1280), num_res_blocks=2, downscale_factor=16, adapter_type="full_adapter_xl", ) ``` The [optimizer](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L952) is initialized for the T2I-Adapter parameters: ```py params_to_optimize = t2iadapter.parameters() optimizer = optimizer_class( params_to_optimize, lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2),
34_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
optimizer = optimizer_class( params_to_optimize, lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) ``` Lastly, in the [training loop](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L1086), the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: ```py
34_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
```py t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) down_block_additional_residuals = t2iadapter(t2iadapter_image) down_block_additional_residuals = [ sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals ]
34_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#training-script
.md
model_pred = unet( inp_noisy_latents, timesteps, encoder_hidden_states=batch["prompt_ids"], added_cond_kwargs=batch["unet_added_conditions"], down_block_additional_residuals=down_block_additional_residuals, ).sample ``` If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.
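To round out the picture, the predicted residual is then compared against a target and backpropagated. The sketch below shows the generic diffusers loss-and-update step, assuming `noise`, `latents`, `timesteps`, `noise_scheduler`, `optimizer`, `lr_scheduler`, and `accelerator` are in scope; the exact target construction in this script may differ:

```py
import torch.nn.functional as F

# The target depends on the scheduler's prediction type
if noise_scheduler.config.prediction_type == "epsilon":
    target = noise
elif noise_scheduler.config.prediction_type == "v_prediction":
    target = noise_scheduler.get_velocity(latents, noise, timesteps)
else:
    raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")

loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
```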
34_3_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#launch-the-script
.md
Now you’re ready to launch the training script! πŸš€ For this example training, you'll use the [fusing/fill50k](https://huggingface.co/datasets/fusing/fill50k) dataset. You can also create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide). Set the environment variable `MODEL_DIR` to a model id on the Hub or a path to a local model and `OUTPUT_DIR` to where you want to save the model.
34_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#launch-the-script
.md
Download the following images to condition your training with: ```bash wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png ``` <Tip>
34_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#launch-the-script
.md
``` <Tip> To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You'll also need to add the `--validation_image`, `--validation_prompt`, and `--validation_steps` parameters to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. </Tip> ```bash export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" export OUTPUT_DIR="path to save model"
34_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#launch-the-script
.md
accelerate launch train_t2i_adapter_sdxl.py \ --pretrained_model_name_or_path=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --dataset_name=fusing/fill50k \ --mixed_precision="fp16" \ --resolution=1024 \ --learning_rate=1e-5 \ --max_train_steps=15000 \ --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ --validation_steps=100 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \
34_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#launch-the-script
.md
--validation_steps=100 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --report_to="wandb" \ --seed=42 \ --push_to_hub ``` Once training is complete, you can use your T2I-Adapter for inference: ```py from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler from diffusers.utils import load_image import torch
34_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#launch-the-script
.md
adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 ) pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) pipeline.enable_xformers_memory_efficient_attention() pipeline.enable_model_cpu_offload()
34_4_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#launch-the-script
.md
control_image = load_image("./conditioning_image_1.png") prompt = "pale golden rod circle with old lace background" generator = torch.manual_seed(0) image = pipeline( prompt, image=control_image, generator=generator ).images[0] image.save("./output.png") ```
34_4_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/t2i_adapters.md
https://huggingface.co/docs/diffusers/en/training/t2i_adapters/#next-steps
.md
Congratulations on training a T2I-Adapter model! πŸŽ‰ To learn more: - Read the [Efficient Controllable Generation for SDXL with T2I-Adapters](https://huggingface.co/blog/t2i-sdxl-adapters) blog post to learn more details about the experimental results from the T2I-Adapter team.
34_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
35_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
35_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
[Latent Consistency Models (LCMs)](https://hf.co/papers/2310.04378) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying *one-stage guided distillation* to the latent space, and incorporating a *skipping-step* method to consistently skip timesteps to accelerate the distillation process
35_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
latent space, and incorporating a *skipping-step* method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details).
35_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
If you're training on a GPU with limited vRAM, try enabling `gradient_checkpointing`, `gradient_accumulation_steps`, and `mixed_precision` to reduce memory usage and speed up training. You can reduce memory usage even more by enabling memory-efficient attention with [xFormers](../optimization/xformers) and [bitsandbytes'](https://github.com/TimDettmers/bitsandbytes) 8-bit optimizer.
35_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
This guide will explore the [train_lcm_distill_sd_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_sd_wds.py) script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ```
35_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then navigate to the example folder containing the training script and install the required dependencies for the script you're using: ```bash cd examples/consistency_distillation pip install -r requirements.txt ``` <Tip>
35_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
```bash cd examples/consistency_distillation pip install -r requirements.txt ``` <Tip> πŸ€— Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the πŸ€— Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an πŸ€— Accelerate environment (try enabling `torch.compile` to significantly speed up training): ```bash
35_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
</Tip> Initialize an πŸ€— Accelerate environment (try enabling `torch.compile` to significantly speed up training): ```bash accelerate config ``` To set up a default πŸ€— Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```py from accelerate.utils import write_basic_config
35_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#latent-consistency-distillation
.md
write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.
35_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#script-parameters
.md
<Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_sd_wds.py) and let us know if you have any questions or concerns. </Tip>
35_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#script-parameters
.md
The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L419) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd
35_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#script-parameters
.md
such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.
35_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#script-parameters
.md
For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command: ```bash accelerate launch train_lcm_distill_sd_wds.py \ --mixed_precision="fp16" ``` Most of the parameters are identical to the parameters in the [Text-to-image](text2image#script-parameters) training guide, so you'll focus on the parameters that are relevant to latent consistency distillation in this guide.
35_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#script-parameters
.md
- `--pretrained_teacher_model`: the path to a pretrained latent diffusion model to use as the teacher model - `--pretrained_vae_model_name_or_path`: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this [VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) by madebyollin which works in fp16) - `--w_min` and `--w_max`: the minimum and maximum guidance scale values for guidance scale sampling
35_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#script-parameters
.md
- `--w_min` and `--w_max`: the minimum and maximum guidance scale values for guidance scale sampling - `--num_ddim_timesteps`: the number of timesteps for DDIM sampling - `--loss_type`: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it's more robust to outliers - `--huber_c`: the Huber loss parameter
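For example, to adjust these distillation-specific values in the training command (the numbers are only illustrative; the script ships with its own defaults):

```bash
accelerate launch train_lcm_distill_sd_wds.py \
  --w_min=5.0 \
  --w_max=15.0 \
  --num_ddim_timesteps=50 \
  --loss_type="huber" \
  --huber_c=0.001
```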
35_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
The training script starts by creating a dataset class - [`Text2ImageDataset`](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L141) - for preprocessing the images and creating a training dataset. ```py def transform(example): image = example["image"] image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR)
35_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) image = TF.crop(image, c_top, c_left, resolution, resolution) image = TF.to_tensor(image) image = TF.normalize(image, [0.5], [0.5])
35_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
example["image"] = image return example ``` For improved performance on reading and writing large datasets stored in the cloud, this script uses the [WebDataset](https://github.com/webdataset/webdataset) format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. ```py processing_pipeline = [ wds.decode("pil", handler=wds.ignore_and_continue),
35_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
```py processing_pipeline = [ wds.decode("pil", handler=wds.ignore_and_continue), wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), wds.map(filter_keys({"image", "text"})), wds.map(transform), wds.to_tuple("image", "text"), ] ```
35_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
wds.map(transform), wds.to_tuple("image", "text"), ] ``` In the [`main()`](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L768) function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training.
35_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
```py teacher_unet = UNet2DConditionModel.from_pretrained( args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision )
35_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
unet = UNet2DConditionModel(**teacher_unet.config) unet.load_state_dict(teacher_unet.state_dict(), strict=False) unet.train() ``` Now you can create the [optimizer](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L979) to update the UNet parameters: ```py optimizer = optimizer_class( unet.parameters(), lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay,
35_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
unet.parameters(), lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) ``` Create the [dataset](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L994): ```py dataset = Text2ImageDataset( train_shards_path_or_url=args.train_shards_path_or_url, num_train_examples=args.max_train_samples, per_gpu_batch_size=args.train_batch_size,
35_3_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
num_train_examples=args.max_train_samples, per_gpu_batch_size=args.train_batch_size, global_batch_size=args.train_batch_size * accelerator.num_processes, num_workers=args.dataloader_num_workers, resolution=args.resolution, shuffle_buffer_size=1000, pin_memory=True, persistent_workers=True, ) train_dataloader = dataset.train_dataloader ```
35_3_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
``` Next, you're ready to set up the [training loop](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L1049) and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. ```py
35_3_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
```py pred_x_0 = predicted_origin( noise_pred, start_timesteps, noisy_model_input, noise_scheduler.config.prediction_type, alpha_schedule, sigma_schedule, )
35_3_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 ```
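The `c_skip_start` and `c_out_start` coefficients enforce the consistency-model boundary condition (the model returns its input at timestep zero). A sketch of how such scalings are typically computed, following the parameterization in the consistency models literature; the `sigma_data` and timestep-scaling values here are assumptions:

```py
def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
    # As timestep -> 0, c_skip -> 1 and c_out -> 0, so f(x, 0) = x
    scaled_timestep = timestep_scaling * timestep
    c_skip = sigma_data**2 / (scaled_timestep**2 + sigma_data**2)
    c_out = scaled_timestep / (scaled_timestep**2 + sigma_data**2) ** 0.5
    return c_skip, c_out
```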
35_3_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
``` It gets the [teacher model predictions](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L1172) and the [LCM predictions](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L1209) next, calculates the loss, and then backpropagates it to the LCM. ```py if args.loss_type == "l2":
35_3_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#training-script
.md
```py if args.loss_type == "l2": loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") elif args.loss_type == "huber": loss = torch.mean( torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c ) ``` If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers tutorial](../using-diffusers/write_own_pipeline) which breaks down the basic pattern of the denoising process.
35_3_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#launch-the-script
.md
Now you're ready to launch the training script and start distilling! For this guide, you'll use the `--train_shards_path_or_url` parameter to specify the path to the [Conceptual Captions 12M](https://github.com/google-research-datasets/conceptual-12m) dataset stored on the Hub [here](https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset). Set the `MODEL_DIR` environment variable to the name of the teacher model and `OUTPUT_DIR` to where you want to save the model. ```bash
35_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#launch-the-script
.md
```bash export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5" export OUTPUT_DIR="path/to/saved/model"
35_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#launch-the-script
.md
accelerate launch train_lcm_distill_sd_wds.py \ --pretrained_teacher_model=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --mixed_precision=fp16 \ --resolution=512 \ --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ --max_train_steps=1000 \ --max_train_samples=4000000 \ --dataloader_num_workers=8 \ --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \
35_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#launch-the-script
.md
--validation_steps=200 \ --checkpointing_steps=200 --checkpoints_total_limit=10 \ --train_batch_size=12 \ --gradient_checkpointing --enable_xformers_memory_efficient_attention \ --gradient_accumulation_steps=1 \ --use_8bit_adam \ --resume_from_checkpoint=latest \ --report_to=wandb \ --seed=453645634 \ --push_to_hub ``` Once training is complete, you can use your new LCM for inference. ```py from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler import torch
35_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#launch-the-script
.md
unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) pipeline.to("cuda") prompt = "sushi rolls in the form of panda heads, sushi platter" image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] ```
35_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#lora
.md
LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MB). Use the [train_lcm_distill_lora_sd_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py) or
35_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#lora
.md
or [train_lcm_distill_lora_sdxl_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_lora_sdxl_wds.py) script to train with LoRA.
35_5_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#lora
.md
The LoRA training script is discussed in more detail in the [LoRA training](lora) guide.
35_5_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#stable-diffusion-xl
.md
Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [train_lcm_distill_sdxl_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_sdxl_wds.py) script to distill an SDXL model. The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide.
35_6_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/lcm_distill.md
https://huggingface.co/docs/diffusers/en/training/lcm_distill/#next-steps
.md
Congratulations on distilling an LCM model! To learn more about LCM, the following may be helpful: - Learn how to use [LCMs for inference](../using-diffusers/lcm) for text-to-image, image-to-image, and with LoRA checkpoints. - Read the [SDXL in 4 steps with Latent Consistency LoRAs](https://huggingface.co/blog/lcm_lora) blog post to learn more about SDXL LCM-LoRAs for super-fast inference, quality comparisons, benchmarks, and more.
35_7_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/
.md
<!--Copyright 2024 Custom Diffusion authors The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
36_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
36_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
.md
[Custom Diffusion](https://huggingface.co/papers/2212.04488) is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time.
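After training, the learned cross-attention weights and the new token embedding are loaded back into a pipeline for inference. A sketch of the usual loading pattern, where the output path is a placeholder and `<new1>` is the modifier token commonly used with the example script:

```py
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the trained cross-attention weights and the learned token embedding
pipeline.unet.load_attn_procs("path/to/custom-diffusion-output", weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.load_textual_inversion("path/to/custom-diffusion-output", weight_name="<new1>.bin")

image = pipeline("<new1> cat swimming in a pond", num_inference_steps=50).images[0]
image.save("custom-diffusion-cat.png")
```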
36_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
.md
If you're training on a GPU with limited vRAM, you should try enabling xFormers with `--enable_xformers_memory_efficient_attention` for faster training with lower vRAM requirements (16GB). To save even more memory, add the `--set_grads_to_none` argument to the training command to set the gradients to `None` instead of zero (this option can cause some issues, so if you experience any, try removing this parameter).
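For instance, both memory-saving flags can be appended to the training command like this (only the two flags are shown; a real run also needs the usual model, data, and output arguments):

```bash
accelerate launch train_custom_diffusion.py \
  --enable_xformers_memory_efficient_attention \
  --set_grads_to_none
```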
36_1_1