## Text-to-image
For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let's condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline.
Load an image and use the [opencv-python](https://github.com/opencv/opencv-python) library to extract the canny image:
```py
from diffusers.utils import load_image, make_image_grid
from PIL import Image
import cv2
import numpy as np
original_image = load_image(
"https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
image = np.array(original_image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_canny_edged.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">canny image</figcaption>
</div>
</div>
Next, load a ControlNet model conditioned on canny edge detection and pass it to the [`StableDiffusionControlNetPipeline`]. Use the faster [`UniPCMultistepScheduler`] and enable model offloading to speed up inference and reduce memory usage.
```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
```
Now pass your prompt and canny image to the pipeline:
```py
output = pipe(
"the mona lisa", image=canny_image
).images[0]
make_image_grid([original_image, canny_image, output], rows=1, cols=3)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-text2img.png"/>
</div>
## Image-to-image
For image-to-image, you'd typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let's condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information.
You'll use the [`StableDiffusionControlNetImg2ImgPipeline`] for this task, which is different from the [`StableDiffusionControlNetPipeline`] because it allows you to pass an initial image as the starting point for the image generation process.
Load an image and use the `depth-estimation` [`~transformers.Pipeline`] from 🤗 Transformers to extract the depth map of an image:
```py
import torch
import numpy as np
from transformers import pipeline
from diffusers.utils import load_image, make_image_grid
image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg"
)
def get_depth_map(image, depth_estimator):
image = depth_estimator(image)["depth"]
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
detected_map = torch.from_numpy(image).float() / 255.0
depth_map = detected_map.permute(2, 0, 1)
return depth_map
depth_estimator = pipeline("depth-estimation")
depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda")
```
Next, load a ControlNet model conditioned on depth maps and pass it to the [`StableDiffusionControlNetImg2ImgPipeline`]. Use the faster [`UniPCMultistepScheduler`] and enable model offloading to speed up inference and reduce memory usage.
```py
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
```
Now pass your prompt, initial image, and depth map to the pipeline:
```py
output = pipe(
"lego batman and robin", image=image, control_image=depth_map,
).images[0]
make_image_grid([image, output], rows=1, cols=2)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img-2.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
</div>
</div>
## Inpainting
For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area.
Load an initial image and a mask image:
```py
from diffusers.utils import load_image, make_image_grid
init_image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg"
)
init_image = init_image.resize((512, 512))
mask_image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg"
)
mask_image = mask_image.resize((512, 512))
make_image_grid([init_image, mask_image], rows=1, cols=2)
```
Create a function to prepare the control image from the initial and mask images. This'll create a tensor to mark the pixels in `init_image` as masked if the corresponding pixel in `mask_image` is over a certain threshold.
```py
import numpy as np
import torch
def make_inpaint_condition(image, image_mask):
image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
assert image.shape[0:1] == image_mask.shape[0:1]
image[image_mask > 0.5] = -1.0 # set as masked pixel
image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
return image
control_image = make_inpaint_condition(init_image, mask_image)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">mask image</figcaption>
</div>
</div>
Load a ControlNet model conditioned on inpainting and pass it to the [`StableDiffusionControlNetInpaintPipeline`]. Use the faster [`UniPCMultistepScheduler`] and enable model offloading to speed up inference and reduce memory usage.
```py
from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
```
Now pass your prompt, initial image, mask image, and control image to the pipeline:
```py
output = pipe(
"corgi face with large ears, detailed, pixar, animated, disney",
num_inference_steps=20,
eta=1.0,
image=init_image,
mask_image=mask_image,
control_image=control_image,
).images[0]
make_image_grid([init_image, mask_image, output], rows=1, cols=3)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-result.png"/>
</div>
## Guess mode
[Guess mode](https://github.com/lllyasviel/ControlNet/discussions/188) does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do its best to "guess" the contents of the input control map (depth map, pose estimation, canny edge, etc.).
Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest `DownBlock` corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the `MidBlock` output becomes 1.0.
<Tip>
Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want.
</Tip>
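To make the scaling concrete, here is a minimal sketch of such a depth-dependent schedule (assuming the standard Stable Diffusion UNet with 12 down-block residuals plus one mid-block residual; the actual scaling is applied inside the ControlNet forward pass):
```py
import torch

# Log-spaced guess mode scales: 0.1 for the shallowest DownBlock residual,
# increasing exponentially up to 1.0 for the MidBlock residual.
scales = torch.logspace(-1, 0, 12 + 1)  # 13 values from 0.1 to 1.0
down_scales, mid_scale = scales[:-1], scales[-1]
print(down_scales, mid_scale)
```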
Set `guess_mode=True` in the pipeline, and it is [recommended](https://github.com/lllyasviel/ControlNet#guess-mode--non-prompt-mode) to set the `guidance_scale` value between 3.0 and 5.0.
```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image, make_image_grid
import numpy as np
import torch
from PIL import Image
import cv2
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True)
pipe = StableDiffusionControlNetPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda")
original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png")
image = np.array(original_image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0]
make_image_grid([original_image, canny_image, image], rows=1, cols=3)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare_guess_mode/output_images/diffusers/output_bird_canny_0.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">regular mode with prompt</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare_guess_mode/output_images/diffusers/output_bird_canny_0_gm.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">guess mode without prompt</figcaption>
</div>
</div>
## ControlNet with Stable Diffusion XL
There aren't too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we've trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We're also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so they're easier to run on resource-constrained hardware. You can find these checkpoints on the [🤗 Diffusers Hub organization](https://huggingface.co/diffusers)!
Let's use an SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and preparing the canny image:
```py
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from PIL import Image
import cv2
import numpy as np
import torch
original_image = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
)
image = np.array(original_image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
make_image_grid([original_image, canny_image], rows=1, cols=2)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hf-logo-canny.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">canny image</figcaption>
</div>
</div>
Load an SDXL ControlNet model conditioned on canny edge detection and pass it to the [`StableDiffusionXLControlNetPipeline`]. You can also enable model offloading to reduce memory usage.
```py
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-canny-sdxl-1.0",
torch_dtype=torch.float16,
use_safetensors=True
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
torch_dtype=torch.float16,
use_safetensors=True
)
pipe.enable_model_cpu_offload()
```
Now pass your prompt (and optionally a negative prompt if you're using one) and canny image to the pipeline:
<Tip>
The [`controlnet_conditioning_scale`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline.__call__.controlnet_conditioning_scale) parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number!
</Tip>
```py
prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = 'low quality, bad quality, sketches'
image = pipe(
prompt,
negative_prompt=negative_prompt,
image=canny_image,
controlnet_conditioning_scale=0.5,
).images[0]
make_image_grid([original_image, canny_image, image], rows=1, cols=3)
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0/resolve/main/out_hug_lab_7.png"/>
</div>
You can use [`StableDiffusionXLControlNetPipeline`] in guess mode as well by setting the `guess_mode` parameter to `True`:
```py
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
from diffusers.utils import load_image, make_image_grid
import numpy as np
import torch
import cv2
from PIL import Image
prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"
original_image = load_image(
"https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
)
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True
)
pipe.enable_model_cpu_offload()
image = np.array(original_image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
image = pipe(
prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True,
).images[0]
make_image_grid([original_image, canny_image, image], rows=1, cols=3)
```
<Tip>
You can use a refiner model with `StableDiffusionXLControlNetPipeline` to improve image quality, just like you can with a regular `StableDiffusionXLPipeline`.
See the [Refine image quality](./sdxl#refine-image-quality) section to learn how to use the refiner model.
Make sure to use `StableDiffusionXLControlNetPipeline` and pass `image` and `controlnet_conditioning_scale`.
```py
base = StableDiffusionXLControlNetPipeline(...)
image = base(
prompt=prompt,
controlnet_conditioning_scale=0.5,
image=canny_image,
num_inference_steps=40,
denoising_end=0.8,
output_type="latent",
).images
# rest exactly as with StableDiffusionXLPipeline
```
</Tip>
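As a rough sketch of the refiner stage referenced in the tip above (assuming the standard `stabilityai/stable-diffusion-xl-refiner-1.0` checkpoint and reusing the `prompt` and the latent `image` returned by the base pipeline; see the linked guide for the full recipe):
```py
from diffusers import StableDiffusionXLImg2ImgPipeline
import torch

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True
)
refiner.enable_model_cpu_offload()

# continue denoising where the ControlNet base pipeline stopped (denoising_end=0.8)
image = refiner(
    prompt=prompt,
    num_inference_steps=40,
    denoising_start=0.8,
    image=image,  # latent output from the base pipeline above
).images[0]
```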
## MultiControlNet
<Tip>
Replace the SDXL model with a model like [stable-diffusion-v1-5/stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) to use multiple conditioning inputs with Stable Diffusion models.
</Tip>
You can compose multiple ControlNet conditionings from different image inputs to create a *MultiControlNet*. To get better results, it is often helpful to:
1. mask conditionings such that they don't overlap (for example, mask the area of a canny image where the pose conditioning is located)
2. experiment with the [`controlnet_conditioning_scale`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline.__call__.controlnet_conditioning_scale) parameter to determine how much weight to assign to each conditioning input
In this example, you'll combine a canny image and a human pose estimation image to generate a new image.
Prepare the canny image conditioning:
```py
from diffusers.utils import load_image, make_image_grid
from PIL import Image
import numpy as np
import cv2
original_image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
)
image = np.array(original_image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
# zero out middle columns of image where pose will be overlaid
zero_start = image.shape[1] // 4
zero_end = zero_start + image.shape[1] // 2
image[:, zero_start:zero_end] = 0
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
make_image_grid([original_image, canny_image], rows=1, cols=2)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/landscape_canny_masked.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">canny image</figcaption>
</div>
</div>
For human pose estimation, install [controlnet_aux](https://github.com/patrickvonplaten/controlnet_aux):
```py
# uncomment to install the necessary library in Colab
#!pip install -q controlnet-aux
```
Prepare the human pose estimation conditioning:
```py
from controlnet_aux import OpenposeDetector
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
original_image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png"
)
openpose_image = openpose(original_image)
make_image_grid([original_image, openpose_image], rows=1, cols=2)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/person_pose.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">human pose image</figcaption>
</div>
</div>
Load a list of ControlNet models that correspond to each conditioning, and pass them to the [`StableDiffusionXLControlNetPipeline`]. Use the faster [`UniPCMultistepScheduler`] and enable model offloading to reduce memory usage.
```py
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler
import torch
controlnets = [
ControlNetModel.from_pretrained(
"thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
),
ControlNetModel.from_pretrained(
"diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True
),
]
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
```
Now you can pass your prompt (and optionally a negative prompt if you're using one), canny image, and pose image to the pipeline:
```py
prompt = "a giant standing in a fantasy landscape, best quality"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality"
generator = torch.manual_seed(1)
images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))]
images = pipe(
prompt,
image=images,
num_inference_steps=25,
generator=generator,
negative_prompt=negative_prompt,
num_images_per_prompt=3,
controlnet_conditioning_scale=[1.0, 0.8],
).images
make_image_grid([original_image, canny_image, openpose_image,
images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3)
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/multicontrolnet.png"/>
</div>
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DiffEdit
[[open-in-colab]]
Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps:
1. the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text
2. the input image is encoded into latent space with DDIM
3. the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image
This guide will show you how to use DiffEdit to edit images without manually creating a mask.
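In terms of the [`StableDiffusionDiffEditPipeline`] API, the three steps roughly map onto three calls (just a preview sketch; the full, runnable version of each call is shown below):
```py
# 1. infer a mask from the difference in noise estimates for the two prompts
mask_image = pipeline.generate_mask(image=raw_image, source_prompt=source_prompt, target_prompt=target_prompt)
# 2. invert the input image into latent space with DDIM
inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents
# 3. denoise the latents conditioned on the target prompt, guided by the mask
output_image = pipeline(prompt=target_prompt, mask_image=mask_image, image_latents=inv_latents, negative_prompt=source_prompt).images[0]
```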
Before you begin, make sure you have the following libraries installed:
```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate
```
The [`StableDiffusionDiffEditPipeline`] requires an image mask and a set of partially inverted latents. The image mask is generated by the [`~StableDiffusionDiffEditPipeline.generate_mask`] function, which takes two parameters, `source_prompt` and `target_prompt`. These parameters determine what to edit in the image. For example, if you want to change a bowl of *fruits* to a bowl of *pears*, then:
```py
source_prompt = "a bowl of fruits"
target_prompt = "a bowl of pears"
```
The partially inverted latents are generated from the [`~StableDiffusionDiffEditPipeline.invert`] function, and it is generally a good idea to include a `prompt` or *caption* describing the image to help guide the inverse latent sampling process. The caption can often be your `source_prompt`, but feel free to experiment with other text descriptions!
Let's load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage:
```py
import torch
from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline
pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1",
torch_dtype=torch.float16,
safety_checker=None,
use_safetensors=True,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()
```
Load the image to edit:
```py
from diffusers.utils import load_image, make_image_grid
img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
raw_image
```
Use the [`~StableDiffusionDiffEditPipeline.generate_mask`] function to generate the image mask. You'll need to pass it the `source_prompt` and `target_prompt` to specify what to edit in the image:
```py
from PIL import Image
source_prompt = "a bowl of fruits"
target_prompt = "a basket of pears"
mask_image = pipeline.generate_mask(
image=raw_image,
source_prompt=source_prompt,
target_prompt=target_prompt,
)
Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
```
Next, create the inverted latents and pass a caption describing the image:
```py
inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents
```
Finally, pass the image mask and inverted latents to the pipeline. The `target_prompt` becomes the `prompt` now, and the `source_prompt` is used as the `negative_prompt`:
```py
output_image = pipeline(
prompt=target_prompt,
mask_image=mask_image,
image_latents=inv_latents,
negative_prompt=source_prompt,
).images[0]
mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/blob/main/assets/target.png?raw=true"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">edited image</figcaption>
</div>
</div>
## Generate source and target embeddings
The source and target embeddings can be automatically generated with the [Flan-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5) model instead of creating them manually.
Load the Flan-T5 model and tokenizer from the 🤗 Transformers library:
```py
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)
```
Provide some initial text to prompt the model to generate the source and target prompts.
```py
source_concept = "bowl"
target_concept = "basket"
source_text = f"Provide a caption for images containing a {source_concept}. "
"The captions should be in English and should be no longer than 150 characters."
target_text = f"Provide a caption for images containing a {target_concept}. "
"The captions should be in English and should be no longer than 150 characters."
```
Next, create a utility function to generate the prompts:
```py
@torch.no_grad()
def generate_prompts(input_prompt):
input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(
input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
)
return tokenizer.batch_decode(outputs, skip_special_tokens=True)
source_prompts = generate_prompts(source_text)
target_prompts = generate_prompts(target_text)
print(source_prompts)
print(target_prompts)
```
<Tip>
Check out the [generation strategy](https://huggingface.co/docs/transformers/main/en/generation_strategies) guide if you're interested in learning more about strategies for generating different quality text.
</Tip>
Load the text encoder model used by the [`StableDiffusionDiffEditPipeline`] to encode the text. You'll use the text encoder to compute the text embeddings:
```py
import torch
from diffusers import StableDiffusionDiffEditPipeline
pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()
@torch.no_grad()
def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"):
embeddings = []
for sent in sentences:
text_inputs = tokenizer(
sent,
padding="max_length",
max_length=tokenizer.model_max_length,
truncation=True,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
embeddings.append(prompt_embeds)
return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0)
source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder)
target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder)
```
Finally, pass the embeddings to the [`~StableDiffusionDiffEditPipeline.generate_mask`] and [`~StableDiffusionDiffEditPipeline.invert`] functions, and to the pipeline to generate the image:
```diff
from diffusers import DDIMInverseScheduler, DDIMScheduler
from diffusers.utils import load_image, make_image_grid
from PIL import Image
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
mask_image = pipeline.generate_mask(
image=raw_image,
- source_prompt=source_prompt,
- target_prompt=target_prompt,
+ source_prompt_embeds=source_embeds,
+ target_prompt_embeds=target_embeds,
)
inv_latents = pipeline.invert(
- prompt=source_prompt,
+ prompt_embeds=source_embeds,
image=raw_image,
).latents
output_image = pipeline(
mask_image=mask_image,
image_latents=inv_latents,
- prompt=target_prompt,
- negative_prompt=source_prompt,
+ prompt_embeds=target_embeds,
+ negative_prompt_embeds=source_embeds,
).images[0]
mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L")
make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)
```
## Generate a caption for inversion
While you can use the `source_prompt` as a caption to help generate the partially inverted latents, you can also use the [BLIP](https://huggingface.co/docs/transformers/model_doc/blip) model to automatically generate a caption.
Load the BLIP model and processor from the 🤗 Transformers library:
```py
import torch
from transformers import BlipForConditionalGeneration, BlipProcessor
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True)
```
Create a utility function to generate a caption from the input image:
```py
@torch.no_grad()
def generate_caption(images, caption_generator, caption_processor):
text = "a photograph of"
inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype)
caption_generator.to("cuda")
outputs = caption_generator.generate(**inputs, max_new_tokens=128)
# offload caption generator
caption_generator.to("cpu")
caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0]
return caption
```
Load an input image and generate a caption for it using the `generate_caption` function:
```py
from diffusers.utils import load_image
img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
caption = generate_caption(raw_image, model, processor)
```
<div class="flex justify-center">
<figure>
<img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"/>
<figcaption class="text-center">generated caption: "a photograph of a bowl of fruit on a table"</figcaption>
</figure>
</div>
Now you can drop the caption into the [`~StableDiffusionDiffEditPipeline.invert`] function to generate the partially inverted latents!
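For example, reusing the [`~StableDiffusionDiffEditPipeline.invert`] call from earlier with the generated caption:
```py
# use the BLIP-generated caption instead of a hand-written source prompt
inv_latents = pipeline.invert(prompt=caption, image=raw_image).latents
```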