Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which are turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion [blog post](https://huggingface.co/blog/stable_diffusion) to learn more about how it works).
Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to a concept in the prompt, because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use [Compel](https://github.com/damian0815/compel), a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a
[`prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.__call__.prompt_embeds) (and optionally [`negative_prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.__call__.negative_prompt_embeds)) parameter, such as [`StableDiffusionPipeline`],
[`StableDiffusionControlNetPipeline`], and [`StableDiffusionXLPipeline`].
<Tip>

If your favorite pipeline doesn't have a `prompt_embeds` parameter, please open an [issue](https://github.com/huggingface/diffusers/issues/new/choose) so we can add it!

</Tip>

This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed:

```py
# uncomment to install in Colab
#!pip install compel --upgrade
```
For this guide, let's generate an image with the prompt `"a red cat playing with a ball"` using the [`StableDiffusionPipeline`]:

```py
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler
import torch

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

prompt = "a red cat playing with a ball"
generator = torch.Generator(device="cpu").manual_seed(33)
prompt = "a red cat playing with a ball" generator = torch.Generator(device="cpu").manual_seed(33) image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/forest_0.png"/> </div>
You'll notice there is no "ball" in the image! Let's use compel to upweight the concept of "ball" in the prompt.

Create a [`Compel`](https://github.com/damian0815/compel/blob/main/doc/compel.md#compel-objects) object, and pass it a tokenizer and text encoder:

```py
from compel import Compel

compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)
```

compel uses `+` or `-` to increase or decrease the weight of a word in the prompt. To increase the weight of "ball":

<Tip>

`+` corresponds to the value `1.1`, `++` corresponds to `1.1^2`, and so on. Similarly, `-` corresponds to `0.9` and `--` corresponds to `0.9^2`. Feel free to experiment with adding more `+` or `-` in your prompt!

</Tip>

```py
prompt = "a red cat playing with a ball++"
```
Pass the prompt to `compel_proc` to create the new prompt embeddings which are passed to the pipeline:

```py
prompt_embeds = compel_proc(prompt)
generator = torch.manual_seed(33)
image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/forest_1.png"/>
</div>

To downweight parts of the prompt, use the `-` suffix:

```py
prompt = "a red------- cat playing with a ball"
prompt_embeds = compel_proc(prompt)
generator = torch.manual_seed(33)
image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png"/>
</div>

You can even up or downweight multiple concepts in the same prompt:

```py
prompt = "a red cat++ playing with a ball----"
prompt_embeds = compel_proc(prompt)
generator = torch.manual_seed(33)
image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-pos-neg.png"/>
</div>
You can also create a weighted *blend* of prompts by adding `.blend()` to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it!

```py
prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)')
generator = torch.Generator(device="cuda").manual_seed(33)
image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-blend.png"/>
</div>
A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. Add `.and()` to the end of a list of prompts to create a conjunction:

```py
prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()')
generator = torch.Generator(device="cuda").manual_seed(55)
image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-conj.png"/>
</div>
[Textual inversion](../training/text_inversion) is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept.

Create a pipeline and use the [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] function to load the textual inversion embeddings (feel free to browse the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer) for 100+ trained concepts):

```py
import torch
from diffusers import StableDiffusionPipeline
from compel import Compel, DiffusersTextualInversionManager

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/midjourney-style")
```

Compel provides a `DiffusersTextualInversionManager` class to simplify prompt weighting with textual inversion. Instantiate `DiffusersTextualInversionManager` and pass it to the `Compel` class:

```py
textual_inversion_manager = DiffusersTextualInversionManager(pipe)
compel_proc = Compel(
    tokenizer=pipe.tokenizer,
    text_encoder=pipe.text_encoder,
    textual_inversion_manager=textual_inversion_manager)
```

Incorporate the concept into the prompt to condition it using the `<concept>` syntax:

```py
prompt_embeds = compel_proc('("A red cat++ playing with a ball <midjourney-style>")')
image = pipe(prompt_embeds=prompt_embeds).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-text-inversion.png"/>
</div>
[DreamBooth](../training/dreambooth) is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. This means you should use [`~DiffusionPipeline.from_pretrained`] to load the DreamBooth model (feel free to browse the [Stable Diffusion Dreambooth Concepts Library](https://huggingface.co/sd-dreambooth-library) for 100+
trained models):

```py
import torch
from diffusers import DiffusionPipeline, UniPCMultistepScheduler
from compel import Compel

pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

Create a `Compel` class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you'll need to incorporate the model's unique identifier into your prompt. For example, the `dndcoverart-v1` model uses the identifier `dndcoverart`:

```py
compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)
prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()')
image = pipe(prompt_embeds=prompt_embeds).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-dreambooth.png"/>
</div>
Stable Diffusion XL (SDXL) has two tokenizers and text encoders so its usage is a bit different. To address this, you should pass both tokenizers and encoders to the `Compel` class:

```py
from compel import Compel, ReturnedEmbeddingsType
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
import torch

pipeline = DiffusionPipeline.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  variant="fp16",
  use_safetensors=True,
  torch_dtype=torch.float16
).to("cuda")

compel = Compel(
  tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2],
  text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2],
  returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
  requires_pooled=[False, True]
)
```
This time, let's upweight "ball" by a factor of 1.5 for the first prompt, and downweight "ball" by 0.6 for the second prompt. The [`StableDiffusionXLPipeline`] also requires [`pooled_prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline.__call__.pooled_prompt_embeds) (and optionally
[`negative_pooled_prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline.__call__.negative_pooled_prompt_embeds)) so you should pass those to the pipeline along with the conditioning tensors:

```py
# apply weights
prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"]
conditioning, pooled = compel(prompt)

# generate image
generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))]
images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images
make_image_grid(images, rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/sdxl_ball1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"a red cat playing with a (ball)1.5"</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/sdxl_ball2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"a red cat playing with a (ball)0.6"</figcaption>
  </div>
</div>
There are several [training](../training/overview) techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different.
This guide will show you how to load DreamBooth, textual inversion, and LoRA weights.

<Tip>

Feel free to browse the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer), [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer), and the [Diffusers Models Gallery](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery) for checkpoints and embeddings to use.

</Tip>
[DreamBooth](https://dreambooth.github.io/) finetunes an *entire diffusion model* on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model.
Let's load the [herge_style](https://huggingface.co/sd-dreambooth-library/herge-style) checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word `herge_style` in your prompt to trigger the checkpoint:

```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" image = pipeline(prompt).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_dreambooth.png" /> </div>
[Textual inversion](https://textual-inversion.github.io/) is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file.
Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model.

```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") ``` Now you can load the textual inversion embeddings with the [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] method and generate some images. Let's load the [sd-concepts-library/gta5-artwork](https://huggingface.co/sd-concepts-library/gta5-artwork) embeddings and you'll need to include the special word `<gta5-artwork>` in your prompt to trigger it:

```py
pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork")
prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, <gta5-artwork> style"
image = pipeline(prompt).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_txt_embed.png" />
</div>
Textual inversion can also be trained on undesirable things to create *negative embeddings* that discourage a model from generating images with those undesirable things, like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You'll also load the embeddings with [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`], but this time, you'll need two more parameters:
- `weight_name`: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format
- `token`: specifies the special word to use in the prompt to trigger the embeddings

Let's load the [sayakpaul/EasyNegative-test](https://huggingface.co/sayakpaul/EasyNegative-test) embeddings:

```py
pipeline.load_textual_inversion(
    "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative"
)
```
"sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" ) ``` Now you can use the `token` to generate an image with the negative embeddings: ```py prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" negative_prompt = "EasyNegative"

image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png" />
</div>
[Low-Rank Adaptation (LoRA)](https://huggingface.co/papers/2106.09685) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store.
<Tip>

LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. It is also increasingly common to load and merge multiple LoRAs to create new and unique images. You can learn more about it in the in-depth [Merge LoRAs](merge_loras) guide since merging is outside the scope of this loading guide.

</Tip>

LoRAs also need to be used with another model:

```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") ``` Then use the [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] method to load the [ostris/super-cereal-sdxl-lora](https://huggingface.co/ostris/super-cereal-sdxl-lora) weights and specify the weights filename from the repository: ```py pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors")
prompt = "bears, pizza bites"
image = pipeline(prompt).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_lora.png" />
</div>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_lora.png" /> </div> The [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: - the LoRA weights don't have separate identifiers for the UNet and text encoder - the LoRA weights have separate identifiers for the UNet and text encoder
To directly load (and save) a LoRA adapter at the *model-level*, use [`~PeftAdapterMixin.load_lora_adapter`], which builds and prepares the necessary model configuration for the adapter. Like [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`], [`PeftAdapterMixin.load_lora_adapter`] can load LoRAs for both the UNet and text encoder. For example, if you're loading a LoRA for the UNet, [`PeftAdapterMixin.load_lora_adapter`] ignores the keys for the text encoder.
Use the `weight_name` parameter to specify the weight file and the `prefix` parameter to filter for the appropriate state dicts (`"unet"` in this case) to load.

```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") pipeline.unet.load_lora_adapter("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", prefix="unet")

# use cnmt in the prompt to trigger the LoRA
prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration"
image = pipeline(prompt).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_attn_proc.png" />
</div>

Save an adapter with [`~PeftAdapterMixin.save_lora_adapter`].
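For example, a minimal sketch of saving the UNet-level adapter loaded above back to disk (the output directory name is just a placeholder):

```py
# save the LoRA adapter currently loaded in the UNet to a local directory (path is illustrative)
pipeline.unet.save_lora_adapter("path/to/saved-lora-adapter")
```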
To unload the LoRA weights, use the [`~loaders.StableDiffusionLoraLoaderMixin.unload_lora_weights`] method to discard the LoRA weights and restore the model to its original weights:

```py
pipeline.unload_lora_weights()
```
For both [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] and [`~loaders.UNet2DConditionLoadersMixin.load_attn_procs`], you can pass the `cross_attention_kwargs={"scale": 0.5}` parameter to adjust how much of the LoRA weights to use. A value of `0` is the same as only using the base model weights, and a value of `1` is equivalent to using the fully finetuned LoRA.
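For example, a minimal sketch, assuming a pipeline that already has a LoRA loaded with [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] (the prompt and scale value are illustrative):

```py
# generate with the LoRA applied at half strength
image = pipeline(
    "bears, pizza bites",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```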
For more granular control on the amount of LoRA weights used per layer, you can use [`~loaders.StableDiffusionLoraLoaderMixin.set_adapters`] and pass a dictionary specifying how much to scale the weights in each layer.

```python
pipe = ... # create pipeline
pipe.load_lora_weights(..., adapter_name="my_adapter")
scales = {
    "text_encoder": 0.5,
    "text_encoder_2": 0.5,  # only usable if pipe has a 2nd text encoder
    "unet": {
        "down": 0.9,  # all transformers in the down-part will use scale 0.9
"unet": { "down": 0.9, # all transformers in the down-part will use scale 0.9 # "mid" # in this example "mid" is not given, therefore all transformers in the mid part will use the default scale 1.0 "up": { "block_0": 0.6, # all 3 transformers in the 0th block in the up-part will use scale 0.6 "block_1": [0.4, 0.8, 1.0], # the 3 transformers in the 1st block in the up-part will use scales 0.4, 0.8 and 1.0 respectively } } } pipe.set_adapters("my_adapter", scales) ```
This also works with multiple adapters - see [this guide](https://huggingface.co/docs/diffusers/tutorials/using_peft_for_inference#customize-adapters-strength) for how to do it, and the short sketch after the tip below.

<Tip warning={true}>

Currently, [`~loaders.StableDiffusionLoraLoaderMixin.set_adapters`] only supports scaling attention weights. If a LoRA has other parts (e.g., resnets or down-/upsamplers), they will keep a scale of 1.0.

</Tip>
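As a rough sketch of the multi-adapter case (the adapter names and weights below are placeholders; the linked guide covers this in depth):

```py
# assumes two LoRAs were loaded earlier with adapter_name="style" and adapter_name="subject"
pipe.set_adapters(["style", "subject"], adapter_weights=[0.8, 1.0])
```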
Other popular LoRA trainers from the community include those by [Kohya](https://github.com/kohya-ss/sd-scripts/) and [TheLastBen](https://github.com/TheLastBen/fast-stable-diffusion). These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way.
<hfoptions id="other-trainers"> <hfoption id="Kohya"> To load a Kohya LoRA, let's download the [Blueprintify SD XL 1.0](https://civitai.com/models/150986/blueprintify-sd-xl-10) checkpoint from [Civitai](https://civitai.com/) as an example: ```sh !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors ``` Load the LoRA checkpoint with the [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] method, and specify the filename in the `weight_name` parameter:

```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") ``` Generate an image: ```py # use bl3uprint in the prompt to trigger the LoRA prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" image = pipeline(prompt).images[0] image ```
<Tip warning={true}>

Some limitations of using Kohya LoRAs with 🤗 Diffusers include:

- Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained [here](https://github.com/huggingface/diffusers/pull/4287/#issuecomment-1655110736).
- [LyCORIS checkpoints](https://github.com/KohakuBlueleaf/LyCORIS) aren't fully supported. The [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported.

</Tip>

</hfoption>
<hfoption id="TheLastBen">

Loading a checkpoint from TheLastBen is very similar. For example, to load the [TheLastBen/William_Eggleston_Style_SDXL](https://huggingface.co/TheLastBen/William_Eggleston_Style_SDXL) checkpoint:

```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors") # use by william eggleston in the prompt to trigger the LoRA prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful" image = pipeline(prompt=prompt).images[0] image ``` </hfoption> </hfoptions>
[IP-Adapter](https://ip-adapter.github.io/) is a lightweight adapter that enables image prompting for any diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs.
You can learn more about how to use IP-Adapter for different tasks and specific use cases in the [IP-Adapter](../using-diffusers/ip_adapter) guide.

> [!TIP]
> Diffusers currently only supports IP-Adapter for some of the most popular pipelines. Feel free to open a feature request if you have a cool use case and want to integrate IP-Adapter with an unsupported pipeline!
> Official IP-Adapter checkpoints are available from [h94/IP-Adapter](https://huggingface.co/h94/IP-Adapter).
To start, load a Stable Diffusion checkpoint.

```py
from diffusers import AutoPipelineForText2Image
import torch
from diffusers.utils import load_image
pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") ``` Then load the IP-Adapter weights and add it to the pipeline with the [`~loaders.IPAdapterMixin.load_ip_adapter`] method. ```py pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") ``` Once loaded, you can use the pipeline with an image and text prompt to guide the image generation process. ```py
Once loaded, you can use the pipeline with an image and text prompt to guide the image generation process.

```py
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png")
generator = torch.Generator(device="cpu").manual_seed(33)
images = pipeline(
    prompt='best quality, high quality, wearing sunglasses',
    ip_adapter_image=image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    generator=generator,
).images[0]
images
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip-bear.png" />
</div>
IP-Adapter relies on an image encoder to generate image features. If the IP-Adapter repository contains an `image_encoder` subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you'll need to explicitly load the image encoder with a [`~transformers.CLIPVisionModelWithProjection`] model and pass it to the pipeline. This is the case for *IP-Adapter Plus* checkpoints which use the ViT-H image encoder.

```py
from transformers import CLIPVisionModelWithProjection

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.safetensors")
```
The IP-Adapter FaceID models are experimental IP Adapters that use image embeddings generated by `insightface` instead of CLIP image embeddings. Some of these models also use LoRA to improve ID consistency. You need to install `insightface` and all its requirements to use these models.

<Tip warning={true}>

As InsightFace pretrained models are available for non-commercial research purposes, IP-Adapter-FaceID models are released exclusively for research purposes and are not intended for commercial use.

</Tip>

```py
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter-FaceID", subfolder=None, weight_name="ip-adapter-faceid_sdxl.bin", image_encoder_folder=None) ``` If you want to use one of the two IP-Adapter FaceID Plus models, you must also load the CLIP image encoder, as this models use both `insightface` and CLIP image embeddings to achieve better photorealism. ```py from transformers import CLIPVisionModelWithProjection

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
    torch_dtype=torch.float16,
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    image_encoder=image_encoder,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter-FaceID", subfolder=None, weight_name="ip-adapter-faceid-plus_sd15.bin")
```
Trajectory Consistency Distillation (TCD) enables a model to generate higher quality and more detailed images with fewer steps. Moreover, owing to the effective error mitigation during the distillation process, TCD demonstrates superior performance even when using a large number of inference steps. The major advantages of TCD are:

- Better than Teacher: TCD demonstrates superior generative quality at both small and large inference steps and exceeds the performance of [DPM-Solver++(2S)](../../api/schedulers/multistep_dpm_solver) with Stable Diffusion XL (SDXL). There is no additional discriminator or LPIPS supervision included during TCD training.
- Flexible Inference Steps: The inference steps for TCD sampling can be freely adjusted without adversely affecting the image quality.
- Freely change detail level: During inference, the level of detail in the image can be adjusted with a single hyperparameter, *gamma*.

> [!TIP]
> For more technical details of TCD, please refer to the [paper](https://arxiv.org/abs/2402.19159) or the official [project page](https://mhh0318.github.io/tcd/).
For large models like SDXL, TCD is trained with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) to reduce memory usage. This is also useful because you can reuse LoRAs between different finetuned models, as long as they share the same base model, without further training.
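For example, a minimal sketch of that reuse (the finetuned checkpoint name below is a placeholder for any model derived from the SDXL base; the TCD-LoRA checkpoints are listed in the table further down):

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

# load an SDXL finetune (placeholder repo id) and reuse the SDXL TCD-LoRA on top of it
pipe = StableDiffusionXLPipeline.from_pretrained("your-org/your-sdxl-finetune", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
```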
This guide will show you how to perform inference with TCD-LoRAs for a variety of tasks like text-to-image and inpainting, as well as how you can easily combine TCD-LoRAs with other adapters. Choose one of the supported base models and its corresponding TCD-LoRA checkpoint from the table below to get started.

| Base model | TCD-LoRA checkpoint |
|------------|---------------------|
| [stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) | [TCD-SD15](https://huggingface.co/h1t/TCD-SD15-LoRA) |
| [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) | [TCD-SD21-base](https://huggingface.co/h1t/TCD-SD21-base-LoRA) |
| [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) | [TCD-SDXL](https://huggingface.co/h1t/TCD-SDXL-LoRA) |

Make sure you have [PEFT](https://github.com/huggingface/peft) installed for better LoRA support.

```bash
pip install -U peft
```
In this guide, let's use the [`StableDiffusionXLPipeline`] and the [`TCDScheduler`]. Use the [`~StableDiffusionPipeline.load_lora_weights`] method to load the SDXL-compatible TCD-LoRA weights.

A few tips to keep in mind for TCD-LoRA inference are to:

- Keep the `num_inference_steps` between 4 and 50
- Set `eta` (used to control stochasticity at each step) between 0 and 1. You should use a higher `eta` when increasing the number of inference steps, but the downside is that a larger `eta` in [`TCDScheduler`] leads to blurrier images. A value of 0.3 is recommended to produce good results.

<hfoptions id="tasks">
<hfoption id="text-to-image">

```python
import torch
<hfoptions id="tasks"> <hfoption id="text-to-image"> ```python import torch from diffusers import StableDiffusionXLPipeline, TCDScheduler
device = "cuda" base_model_id = "stabilityai/stable-diffusion-xl-base-1.0" tcd_lora_id = "h1t/TCD-SDXL-LoRA" pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device) pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config) pipe.load_lora_weights(tcd_lora_id) pipe.fuse_lora()

prompt = "Painting of the orange cat Otto von Garfield, Count of Bismarck-Schönhausen, Duke of Lauenburg, Minister-President of Prussia. Depicted wearing a Prussian Pickelhaube and eating his favorite meal - lasagna."

image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/demo_image.png)

</hfoption>
<hfoption id="inpainting">

```python
import torch
from diffusers import AutoPipelineForInpainting, TCDScheduler
from diffusers.utils import load_image, make_image_grid