## ControlNet
<Tip warning={true}>
⚠️ ControlNet is only supported for Kandinsky 2.2!
</Tip>
ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image.
Let's load an image and extract its depth map:
```py
from diffusers.utils import load_image
img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"
).resize((768, 768))
img
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"/>
</div>
Then you can use the `depth-estimation` [`~transformers.Pipeline`] from 🤗 Transformers to process the image and retrieve the depth map:
```py
import torch
import numpy as np
from transformers import pipeline
def make_hint(image, depth_estimator):
    image = depth_estimator(image)["depth"]
    image = np.array(image)
    image = image[:, :, None]
    # stack the single-channel depth map into 3 channels
    image = np.concatenate([image, image, image], axis=2)
    detected_map = torch.from_numpy(image).float() / 255.0
    hint = detected_map.permute(2, 0, 1)
    return hint
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
```
### Text-to-image
Load the prior pipeline and the [`KandinskyV22ControlnetPipeline`]:
```py
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline
prior_pipeline = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
pipeline = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")
```
Generate the image embeddings from a prompt and negative prompt:
```py
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
image_emb, zero_image_emb = prior_pipeline(
    prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator
).to_tuple()
```
Finally, pass the image embeddings and the depth image to the [`KandinskyV22ControlnetPipeline`] to generate an image:
```py
image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0]
image
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat_text2img.png"/>
</div>
### Image-to-image
For image-to-image with ControlNet, you'll need to use the:
- [`KandinskyV22PriorEmb2EmbPipeline`] to generate the image embeddings from a text prompt and an image
- [`KandinskyV22ControlnetImg2ImgPipeline`] to generate an image from the initial image and the image embeddings
Process and extract a depth map of an initial image of a cat with the `depth-estimation` [`~transformers.Pipeline`] from 🤗 Transformers:
```py
import torch
import numpy as np
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image, make_image_grid
from transformers import pipeline
img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"
).resize((768, 768))
def make_hint(image, depth_estimator):
    image = depth_estimator(image)["depth"]
    image = np.array(image)
    image = image[:, :, None]
    image = np.concatenate([image, image, image], axis=2)
    detected_map = torch.from_numpy(image).float() / 255.0
    hint = detected_map.permute(2, 0, 1)
    return hint
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
```
Load the prior pipeline and the [`KandinskyV22ControlnetImg2ImgPipeline`]:
```py
prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")
```
Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings:
```py
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator)
negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator)
```
Now you can run the [`KandinskyV22ControlnetImg2ImgPipeline`] to generate an image from the initial image and the image embeddings:
```py
image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0]
make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat.png"/>
</div>
## Optimizations
Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference.
1. Enable [xFormers](../optimization/xformers) if you're using PyTorch < 2.0:
```diff
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
+ pipe.enable_xformers_memory_efficient_attention()
```
2. Enable `torch.compile` if you're using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA):
```diff
pipe.unet.to(memory_format=torch.channels_last)
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
This is the same as explicitly setting the attention processor to use [`~models.attention_processor.AttnAddedKVProcessor2_0`]:
```py
from diffusers.models.attention_processor import AttnAddedKVProcessor2_0
pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0())
```
3. Offload the model to the CPU with [`~KandinskyPriorPipeline.enable_model_cpu_offload`] to avoid out-of-memory errors:
```diff
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
+ pipe.enable_model_cpu_offload()
```
4. By default, the text-to-image pipeline uses the [`DDIMScheduler`] but you can replace it with another scheduler like [`DDPMScheduler`] to see how that affects the tradeoff between inference speed and image quality:
```py
from diffusers import DDPMScheduler
from diffusers import DiffusionPipeline
scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler")
pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```
# Scheduler features
The scheduler is an important component of any diffusion model because it controls the entire denoising (or sampling) process. There are many types of schedulers, some are optimized for speed and some for quality. With Diffusers, you can modify the scheduler configuration to use custom noise schedules, sigmas, and rescale the noise schedule. Changing these parameters can have profound effects on inference quality and speed.
This guide will demonstrate how to use these features to improve inference quality.
> [!TIP]
> Diffusers currently only supports the `timesteps` and `sigmas` parameters for a select list of schedulers and pipelines. Feel free to open a [feature request](https://github.com/huggingface/diffusers/issues/new/choose) if you want to extend these parameters to a scheduler and pipeline that does not currently support it!
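Before customizing anything, it helps to look at what the current scheduler is configured with. A minimal sketch (the SDXL checkpoint here is just an example; any pipeline works the same way):

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
)

# the scheduler's config holds the noise schedule parameters (beta schedule, timestep spacing, ...)
print(pipeline.scheduler.config)

# schedulers that can be swapped in for this pipeline with `from_config`
print(pipeline.scheduler.compatibles)
```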
## Timestep schedules
The timestep or noise schedule determines the amount of noise at each sampling step. The scheduler uses this to generate an image with the corresponding amount of noise at each step. The timestep schedule is generated from the scheduler's default configuration, but you can customize the scheduler to use new and optimized sampling schedules that aren't in Diffusers yet.
For example, [Align Your Steps (AYS)](https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/) is a method for optimizing a sampling schedule to generate a high-quality image in as little as 10 steps. The optimal [10-step schedule](https://github.com/huggingface/diffusers/blob/a7bf77fc284810483f1e60afe34d1d27ad91ce2e/src/diffusers/schedulers/scheduling_utils.py#L51) for Stable Diffusion XL is:
```py
from diffusers.schedulers import AysSchedules
sampling_schedule = AysSchedules["StableDiffusionXLTimesteps"]
print(sampling_schedule)
"[999, 845, 730, 587, 443, 310, 193, 116, 53, 13]"
```
You can use the AYS sampling schedule in a pipeline by passing it to the `timesteps` parameter.
```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, algorithm_type="sde-dpmsolver++")

prompt = "A cinematic shot of a cute little rabbit wearing a jacket and doing a thumbs up"
generator = torch.Generator(device="cpu").manual_seed(2487854446)
image = pipeline(
    prompt=prompt,
    negative_prompt="",
    generator=generator,
    timesteps=sampling_schedule,
).images[0]
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ays.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">AYS timestep schedule 10 steps</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/10.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">Linearly-spaced timestep schedule 10 steps</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/25.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">Linearly-spaced timestep schedule 25 steps</figcaption>
</div>
</div>
## Timestep spacing
The way sample steps are selected in the schedule can affect the quality of the generated image, especially with respect to [rescaling the noise schedule](#rescale-noise-schedule), which can enable a model to generate much brighter or darker images. Diffusers provides three timestep spacing methods:
- `leading` creates evenly spaced steps
- `linspace` includes the first and last steps and evenly selects the remaining intermediate steps
- `trailing` only includes the last step and evenly selects the remaining intermediate steps starting from the end
It is recommended to use the `trailing` spacing method because it generates higher quality images with more details when there are fewer sample steps. But the difference in quality is not as obvious for more standard sample step values.
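Roughly speaking, the three strategies pick the following timesteps for a 5-step schedule over 1000 training timesteps (a sketch; the exact formulas differ slightly between schedulers, which also reverse the arrays so sampling runs from high noise to low noise):

```py
import numpy as np

T = 1000  # training timesteps
n = 5     # inference steps

# "leading": even spacing counted from the start; the final training step is never reached
leading = np.arange(0, n) * (T // n)          # [0, 200, 400, 600, 800]

# "linspace": the first and last training steps are both included
linspace = np.linspace(0, T - 1, n).round()   # [0, 250, 500, 749, 999]

# "trailing": even spacing counted back from the last training step
trailing = np.arange(T, 0, -(T // n)) - 1     # [999, 799, 599, 399, 199]
```

The example below switches the scheduler to `trailing` spacing and samples with only 5 steps.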
```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, timestep_spacing="trailing")
prompt = "A cinematic shot of a cute little black cat sitting on a pumpkin at night"
generator = torch.Generator(device="cpu").manual_seed(2487854446)
image = pipeline(
    prompt=prompt,
    negative_prompt="",
    generator=generator,
    num_inference_steps=5,
).images[0]
image
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/trailing_spacing.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">trailing spacing after 5 steps</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/leading_spacing.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">leading spacing after 5 steps</figcaption>
</div>
</div>
## Sigmas
The `sigmas` parameter is the amount of noise added at each timestep according to the timestep schedule. Like the `timesteps` parameter, you can customize the `sigmas` parameter to control how much noise is added at each step. When you use a custom `sigmas` value, the `timesteps` are calculated from the custom `sigmas` value and the default scheduler configuration is ignored.
For example, you can manually pass the [sigmas](https://github.com/huggingface/diffusers/blob/6529ee67ec02fcf58d2fd9242164ea002b351d75/src/diffusers/schedulers/scheduling_utils.py#L55) for something like the 10-step AYS schedule from before to the pipeline.
```py
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = DiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.0]
prompt = "anthropomorphic capybara wearing a suit and working with a computer"
generator = torch.Generator(device='cuda').manual_seed(123)
image = pipeline(
    prompt=prompt,
    num_inference_steps=10,
    sigmas=sigmas,
    generator=generator
).images[0]
```
When you take a look at the scheduler's `timesteps` parameter, you'll see that it is the same as the AYS timestep schedule because the timestep schedule is calculated from the `sigmas`.
```py
print(f"timesteps: {pipeline.scheduler.timesteps}")
"timesteps: tensor([999., 845., 730., 587., 443., 310., 193., 116., 53., 13.], device='cuda:0')"
```
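That conversion is essentially a log-sigma interpolation against the sigmas implied by the training noise schedule. A rough sketch, assuming the scaled-linear beta schedule used by Stable Diffusion models (the exact implementation lives in each scheduler):

```py
import numpy as np

# sigmas implied by the training beta schedule (scaled-linear, 1000 steps)
betas = np.linspace(0.00085**0.5, 0.012**0.5, 1000) ** 2
alphas_cumprod = np.cumprod(1.0 - betas)
all_sigmas = ((1 - alphas_cumprod) / alphas_cumprod) ** 0.5

# map each custom sigma back to a (fractional) training timestep by interpolating in log space
custom_sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113]
timesteps = np.interp(np.log(custom_sigmas), np.log(all_sigmas), np.arange(1000))
print(timesteps.round())  # approximately [999, 845, 730, 587, 443, 310, 193, 116, 53, 13]
```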
### Karras sigmas
> [!TIP]
> Refer to the scheduler API [overview](../api/schedulers/overview) for a list of schedulers that support Karras sigmas.
>
> Karras sigmas should not be used for models that weren't trained with them. For example, the base Stable Diffusion XL model shouldn't use Karras sigmas, but the [DreamShaperXL](https://hf.co/Lykon/dreamshaper-xl-1-0) model can since it was trained with Karras sigmas.
Karras schedulers use the timestep schedule and sigmas from the [Elucidating the Design Space of Diffusion-Based Generative Models](https://hf.co/papers/2206.00364) paper. This scheduler variant applies a smaller amount of noise per step as it approaches the end of the sampling process compared to other schedulers, and can increase the level of detail in the generated image.
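The schedule itself is short to write down. A small sketch of the formula from the paper, using its default `rho=7` (the `sigma_min` and `sigma_max` values below are just the typical Stable Diffusion ones and are an assumption, not part of the guide):

```py
import numpy as np

def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    # interpolate evenly in sigma^(1/rho) space, then raise back to the rho power
    ramp = np.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

print(karras_sigmas(10))  # starts at sigma_max and decays quickly toward sigma_min
```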
Enable Karras sigmas by setting `use_karras_sigmas=True` in the scheduler.
```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True)
prompt = "A cinematic shot of a cute little rabbit wearing a jacket and doing a thumbs up"
generator = torch.Generator(device="cpu").manual_seed(2487854446)
image = pipeline(
    prompt=prompt,
    negative_prompt="",
    generator=generator,
).images[0]
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/karras_sigmas_true.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">Karras sigmas enabled</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/karras_sigmas_false.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">Karras sigmas disabled</figcaption>
</div>
</div>
## Rescale noise schedule
In the [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://hf.co/papers/2305.08891) paper, the authors discovered that common noise schedules allowed some signal to leak into the last timestep. This signal leakage at inference can cause models to only generate images with medium brightness. By enforcing a zero signal-to-noise ratio (SNR) for the timestep schedule and sampling from the last timestep, the model can be improved to generate very bright or dark images.
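Zero-SNR rescaling shifts and rescales the cumulative noise schedule so that the final timestep is pure noise. A sketch of the idea (not the exact implementation the schedulers use):

```py
import torch

def rescale_to_zero_terminal_snr(betas):
    alphas_bar_sqrt = torch.cumprod(1.0 - betas, dim=0).sqrt()
    first, last = alphas_bar_sqrt[0].clone(), alphas_bar_sqrt[-1].clone()

    # shift so the last timestep has sqrt(alpha_bar) == 0 (pure noise, zero SNR),
    # then rescale so the first timestep keeps its original value
    alphas_bar_sqrt = (alphas_bar_sqrt - last) * first / (first - last)
    return alphas_bar_sqrt**2  # the rescaled cumulative schedule
```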
> [!TIP]
> For inference, you need a model that has been trained with *v_prediction*. To train your own model with *v_prediction*, add the following flag to the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts.
>
> ```bash
> --prediction_type="v_prediction"
> ```
For example, load the [ptx0/pseudo-journey-v2](https://hf.co/ptx0/pseudo-journey-v2) checkpoint which was trained with `v_prediction` and the [`DDIMScheduler`]. Configure the following parameters in the [`DDIMScheduler`]:
* `rescale_betas_zero_snr=True` to rescale the noise schedule to zero SNR
* `timestep_spacing="trailing"` to start sampling from the last timestep
Set `guidance_rescale` in the pipeline to prevent over-exposure. A lower value increases brightness but some of the details may appear washed out.
```py
import torch
from diffusers import DiffusionPipeline, DDIMScheduler
pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True)
pipeline.scheduler = DDIMScheduler.from_config(
    pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipeline.to("cuda")
prompt = "cinematic photo of a snowy mountain at night with the northern lights aurora borealis overhead, 35mm photograph, film, professional, 4k, highly detailed"
generator = torch.Generator(device="cpu").manual_seed(23)
image = pipeline(prompt, guidance_rescale=0.7, generator=generator).images[0]
image
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/no-zero-snr.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">default Stable Diffusion v2-1 image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/zero-snr.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">image with zero SNR and trailing timestep spacing enabled</figcaption>
</div>
</div>
# Reproducible pipelines
Diffusion models are inherently random, which is what allows them to generate different outputs every time they are run. But there are certain times when you want to generate the same output every time, like when you're testing, replicating results, and even [improving image quality](#deterministic-batch-generation). While you can't expect to get identical results across platforms, you can expect reproducible results across releases and platforms within a certain tolerance range (though even this may vary).
This guide will show you how to control randomness for deterministic generation on a CPU and GPU.
> [!TIP]
> We strongly recommend reading PyTorch's [statement about reproducibility](https://pytorch.org/docs/stable/notes/randomness.html):
>
> "Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds."
During inference, pipelines rely heavily on random sampling operations which include creating the
Gaussian noise tensors to denoise and adding noise to the scheduling step.
Take a look at the tensor values in the [`DDIMPipeline`] after two inference steps.
```python
from diffusers import DDIMPipeline
import numpy as np
ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True)
image = ddim(num_inference_steps=2, output_type="np").images
print(np.abs(image).sum())
```
Running the code above prints one value, but if you run it again you get a different value.
Each time the pipeline is run, [torch.randn](https://pytorch.org/docs/stable/generated/torch.randn.html) uses a different random seed to create the Gaussian noise tensors. This leads to a different result each time it is run and enables the diffusion pipeline to generate a different random image each time.
But if you need to reliably generate the same image, that depends on whether you're running the pipeline on a CPU or GPU.
> [!TIP]
> It might seem unintuitive to pass `Generator` objects to a pipeline instead of the integer value representing the seed. However, this is the recommended design when working with probabilistic models in PyTorch because a `Generator` is a *random state* that can be passed to multiple pipelines in a sequence. As soon as the `Generator` is consumed, the *state* is changed in place which means even if you passed the same `Generator` to a different pipeline, it won't produce the same result because the state is already changed.
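A tiny illustration of that in-place state change with plain `torch.randn` (not a pipeline):

```py
import torch

g = torch.Generator().manual_seed(0)
a = torch.randn(3, generator=g)  # consumes part of the generator's state
b = torch.randn(3, generator=g)  # different values: the state has advanced

g2 = torch.Generator().manual_seed(0)
c = torch.randn(3, generator=g2)  # identical to `a` because the state was reset by the seed
```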
<hfoptions id="hardware">
<hfoption id="CPU">
To generate reproducible results on a CPU, you'll need to use a PyTorch [Generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) and set a seed. Now when you run the code, it always prints a value of `1491.1711` because the `Generator` object with the seed is passed to all the random functions in the pipeline. You should get a similar, if not the same, result on whatever hardware and PyTorch version you're using.
```python
import torch
import numpy as np
from diffusers import DDIMPipeline
ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True)
generator = torch.Generator(device="cpu").manual_seed(0)
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```
</hfoption>
<hfoption id="GPU">
Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code from the CPU example above, you'll get a different result even though the seed is identical. This is because the GPU uses a different random number generator than the CPU.
```python
import torch
import numpy as np
from diffusers import DDIMPipeline
ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True)
ddim.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```
To avoid this issue, Diffusers has a [`~utils.torch_utils.randn_tensor`] function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The [`~utils.torch_utils.randn_tensor`] function is used everywhere inside the pipeline. Now you can call [torch.manual_seed](https://pytorch.org/docs/stable/generated/torch.manual_seed.html) which automatically creates a CPU `Generator` that can be passed to the pipeline even if it is being run on a GPU.
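For reference, calling it directly looks roughly like this (the shape below is just an arbitrary example):

```py
import torch
from diffusers.utils.torch_utils import randn_tensor

# noise is sampled on the CPU with the seeded generator, then moved to the requested device
noise = randn_tensor((1, 3, 32, 32), generator=torch.manual_seed(0), device=torch.device("cuda"))
```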
```python
import torch
import numpy as np
from diffusers import DDIMPipeline
ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True)
ddim.to("cuda")
generator = torch.manual_seed(0)
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```
> [!TIP]
> If reproducibility is important to your use case, we recommend always passing a CPU `Generator`. The performance loss is often negligible and you'll generate more similar values than if the pipeline had been run on a GPU.
Finally, more complex pipelines such as [`UnCLIPPipeline`], are often extremely
susceptible to precision error propagation. You'll need to use
exactly the same hardware and PyTorch version for full reproducibility.
</hfoption>
</hfoptions>
## Deterministic algorithms
You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. The downside is that deterministic algorithms may be slower than non-deterministic ones and you may observe a decrease in performance.
Non-deterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable [CUBLAS_WORKSPACE_CONFIG](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility) to `:16:8` to only use one buffer size during runtime.
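For example, you can set the variable from Python before any CUDA work happens (or export it in your shell before launching the script):

```py
import os

# must be set before cuBLAS is initialized for it to take effect
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"
```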
PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Set Diffusers [enable_full_determinism](https://github.com/huggingface/diffusers/blob/142f353e1c638ff1d20bd798402b68f72c1ebbdd/src/diffusers/utils/testing_utils.py#L861) to enable deterministic algorithms.
```py
from diffusers.utils.testing_utils import enable_full_determinism

enable_full_determinism()
```
Now when you run the same pipeline twice, you'll get identical results.
```py
import torch
from diffusers import DDIMScheduler, StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
g = torch.Generator(device="cuda")
prompt = "A bear is playing a guitar on Times Square"
g.manual_seed(0)
result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images
g.manual_seed(0)
result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images
print("L_inf dist =", abs(result1 - result2).max())
"L_inf dist = tensor(0., device='cuda:0')"
```
## Deterministic batch generation
A practical application of creating reproducible pipelines is *deterministic batch generation*. You generate a batch of images and select one image to improve with a more detailed prompt. The main idea is to pass a list of [Generators](https://pytorch.org/docs/stable/generated/torch.Generator.html) to the pipeline and tie each `Generator` to a seed so you can reuse it.
Let's use the [stable-diffusion-v1-5/stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) checkpoint and generate a batch of images.
```py
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
pipeline = pipeline.to("cuda")
```
Define four different `Generator`s and assign each `Generator` a seed (`0` to `3`). Then generate a batch of images and pick one to iterate on.
> [!WARNING]
> Use a list comprehension that iterates over the batch size specified in `range()` to create a unique `Generator` object for each image in the batch. If you multiply the `Generator` by the batch size integer, it only creates *one* `Generator` object that is used sequentially for each image in the batch.
>
> ```py
> [torch.Generator().manual_seed(seed)] * 4
> ```
```python
generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)]
prompt = "Labrador in the style of Vermeer"
images = pipeline(prompt, generator=generator, num_images_per_prompt=4).images
make_image_grid(images, rows=2, cols=2)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/reusabe_seeds.jpg"/>
</div>
Let's improve the first image (you can choose any image you want) which corresponds to the `Generator` with seed `0`. Add some additional text to your prompt and then make sure you reuse the same `Generator` with seed `0`. All the generated images should resemble the first image.
```python
prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]]
generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)]
images = pipeline(prompt, generator=generator).images
make_image_grid(images, rows=2, cols=2)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/reusabe_seeds_2.jpg"/>
</div>
# Text-to-image
[[open-in-colab]]
When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k") which is also known as a *prompt*.
From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The *denoising* process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image.
<Tip>
Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog post to learn more about how a latent diffusion model works.
</Tip>
You can generate images from a prompt in 🤗 Diffusers in two steps:
1. Load a checkpoint into the [`AutoPipelineForText2Image`] class, which automatically detects the appropriate pipeline class to use based on the checkpoint:
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
```
2. Pass a prompt to the pipeline to generate an image:
```py
image = pipeline(
    "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k"
).images[0]
image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-vader.png"/>
</div>
## Popular models
The most common text-to-image models are [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), [Stable Diffusion XL (SDXL)](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), and [Kandinsky 2.2](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder). There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process.
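For instance, switching to another checkpoint is just a matter of changing the repository id passed to [`AutoPipelineForText2Image`] (a sketch using SDXL; any of the checkpoints above works the same way):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0]
```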