```python
import torch
from diffusers import AutoPipelineForInpainting, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
base_model_id = "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
pipe = AutoPipelineForInpainting.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).resize((1024, 1024))
mask_image = load_image(mask_url).resize((1024, 1024))
prompt = "a tiger sitting on a park bench" | 53_2_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#general-tasks | .md | prompt = "a tiger sitting on a park bench"
image = pipe(
prompt=prompt,
image=init_image,
mask_image=mask_image,
num_inference_steps=8,
guidance_scale=0,
eta=0.3,
strength=0.99, # make sure to use `strength` below 1.0
generator=torch.Generator(device=device).manual_seed(0),
).images[0]
grid_image = make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

</hfoption>
</hfoptions>

## Community models

TCD-LoRA also works with many community finetuned models and plugins. For example, load the [animagine-xl-3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0) checkpoint which is a community finetuned version of SDXL for generating anime images.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler
device = "cuda"
base_model_id = "cagliostrolab/animagine-xl-3.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA" | 53_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#community-models | .md | device = "cuda"
base_model_id = "cagliostrolab/animagine-xl-3.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "A man, clad in a meticulously tailored military uniform, stands with unwavering resolve. The uniform boasts intricate details, and his eyes gleam with determination. Strands of vibrant, windswept hair peek out from beneath the brim of his cap." | 53_3_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#community-models | .md | image = pipe(
prompt=prompt,
num_inference_steps=8,
guidance_scale=0,
eta=0.3,
generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

TCD-LoRA also supports other LoRAs trained on different styles. For example, let's load the [TheLastBen/Papercut_SDXL](https://huggingface.co/TheLastBen/Papercut_SDXL) LoRA and combine it with the TCD-LoRA using the [`~loaders.UNet2DConditionLoadersMixin.set_adapters`] method.

> [!TIP]
> Check out the [Merge LoRAs](merge_loras) guide to learn more about efficient merging methods.
```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
styled_lora_id = "TheLastBen/Papercut_SDXL"
pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id, adapter_name="tcd")
pipe.load_lora_weights(styled_lora_id, adapter_name="style")
pipe.set_adapters(["tcd", "style"], adapter_weights=[1.0, 1.0])
prompt = "papercut of a winter mountain, snow"
image = pipe(
prompt=prompt,
num_inference_steps=4,
guidance_scale=0,
eta=0.3,
generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```
 | 53_3_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#adapters | .md | TCD-LoRA is very versatile, and it can be combined with other adapter types like ControlNets, IP-Adapter, and AnimateDiff.
<hfoptions id="adapters">
<hfoption id="ControlNet"> | 53_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#depth-controlnet | .md | ```python
import torch
import numpy as np
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to(device)
feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")

def get_depth_map(image):
    image = feature_extractor(images=image, return_tensors="pt").pixel_values.to(device)
    with torch.no_grad(), torch.autocast(device):
        depth_map = depth_estimator(image).predicted_depth

    depth_map = torch.nn.functional.interpolate(
        depth_map.unsqueeze(1),
        size=(1024, 1024),
        mode="bicubic",
        align_corners=False,
    )
    depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_map = (depth_map - depth_min) / (depth_max - depth_min)
    image = torch.cat([depth_map] * 3, dim=1)

    image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
    image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
    return image

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-depth-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
controlnet = ControlNetModel.from_pretrained(
controlnet_id,
torch_dtype=torch.float16,
variant="fp16",
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
base_model_id,
controlnet=controlnet,
torch_dtype=torch.float16,
variant="fp16",
)
pipe.enable_model_cpu_offload()
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()
prompt = "stormtrooper lecture, photorealistic"
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")
depth_image = get_depth_map(image)
controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
prompt,
image=depth_image,
num_inference_steps=4,
guidance_scale=0,
eta=0.3,
controlnet_conditioning_scale=controlnet_conditioning_scale,
generator=torch.Generator(device=device).manual_seed(0),
).images[0]
grid_image = make_image_grid([depth_image, image], rows=1, cols=2)
```
 | 53_5_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#canny-controlnet | .md | ```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-canny-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA" | 53_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#canny-controlnet | .md | controlnet = ControlNetModel.from_pretrained(
controlnet_id,
torch_dtype=torch.float16,
variant="fp16",
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
base_model_id,
controlnet=controlnet,
torch_dtype=torch.float16,
variant="fp16",
)
pipe.enable_model_cpu_offload()
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()
prompt = "ultrarealistic shot of a furry blue bird" | 53_6_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#canny-controlnet | .md | pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()
prompt = "ultrarealistic shot of a furry blue bird"
canny_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png")
controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
prompt,
image=canny_image,
num_inference_steps=4,
guidance_scale=0,
eta=0.3,
controlnet_conditioning_scale=controlnet_conditioning_scale,
generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([canny_image, image], rows=1, cols=2)
```

<Tip>
The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best ones.
</Tip>
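For instance, here is a minimal sketch of such a sweep. It reuses the `pipe`, `prompt`, `canny_image`, and `device` objects defined above; the candidate value grids are illustrative assumptions, not recommended settings.

```python
# Illustrative sweep over a few candidate settings (the value lists are assumptions).
import itertools

candidate_steps = [4, 8]
candidate_cond_scales = [0.4, 0.5, 0.6]

sweep_results = []
for steps, cond_scale in itertools.product(candidate_steps, candidate_cond_scales):
    image = pipe(
        prompt,
        image=canny_image,
        num_inference_steps=steps,
        guidance_scale=0,
        eta=0.3,
        controlnet_conditioning_scale=cond_scale,
        generator=torch.Generator(device=device).manual_seed(0),
    ).images[0]
    sweep_results.append(((steps, cond_scale), image))
```

Inspect the images in `sweep_results` and keep the parameter combination that works best for your conditioning image.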
</hfoption>
<hfoption id="IP-Adapter"> | 53_6_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#canny-controlnet | .md | </Tip>
</hfoption>
<hfoption id="IP-Adapter">
This example shows how to use the TCD-LoRA with the [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter/tree/main) and SDXL.
```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

from ip_adapter import IPAdapterXL

device = "cuda"
base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
image_encoder_path = "sdxl_models/image_encoder"
ip_ckpt = "sdxl_models/ip-adapter_sdxl.bin"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
pipe = StableDiffusionXLPipeline.from_pretrained(
base_model_path,
torch_dtype=torch.float16,
variant="fp16"
)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

ip_model = IPAdapterXL(pipe, image_encoder_path, ip_ckpt, device)
ref_image = load_image("https://raw.githubusercontent.com/tencent-ailab/IP-Adapter/main/assets/images/woman.png").resize((512, 512))
prompt = "best quality, high quality, wearing sunglasses"
image = ip_model.generate(
pil_image=ref_image,
prompt=prompt,
scale=0.5,
num_samples=1,
num_inference_steps=4,
guidance_scale=0,
eta=0.3,
seed=0,
)[0]

grid_image = make_image_grid([ref_image, image], rows=1, cols=2)
```

</hfoption>
<hfoption id="AnimateDiff">
[`AnimateDiff`] allows animating images using Stable Diffusion models. TCD-LoRA can substantially accelerate the process without degrading image quality, and animations generated with TCD-LoRA and AnimateDiff also have a more lucid outcome.
```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, TCDScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5")
pipe = AnimateDiffPipeline.from_pretrained(
"frankjoshua/toonyou_beta6",
motion_adapter=adapter,
).to("cuda")
# set TCDScheduler
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# load TCD LoRA
pipe.load_lora_weights("h1t/TCD-SD15-LoRA", adapter_name="tcd")
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") | 53_6_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#canny-controlnet | .md | pipe.set_adapters(["tcd", "motion-lora"], adapter_weights=[1.0, 1.2]) | 53_6_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md | https://huggingface.co/docs/diffusers/en/using-diffusers/inference_with_tcd_lora/#canny-controlnet | .md | prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress"
generator = torch.manual_seed(0)
frames = pipe(
prompt=prompt,
num_inference_steps=5,
guidance_scale=0,
cross_attention_kwargs={"scale": 1},
num_frames=24,
eta=0.3,
generator=generator
).frames[0]
export_to_gif(frames, "animation.gif")
```

</hfoption>
</hfoptions>

<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 54_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/kandinsky.md | https://huggingface.co/docs/diffusers/en/using-diffusers/kandinsky/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Kandinsky

[[open-in-colab]]
The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet.

[Kandinsky 2.1](../api/pipelines/kandinsky) changes the architecture to include an image prior model ([`CLIP`](https://huggingface.co/docs/transformers/model_doc/clip)) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a [Modulating Quantized Vectors (MoVQ)](https://huggingface.co/papers/2209.09002) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images.

[Kandinsky 2.2](../api/pipelines/kandinsky_v22) improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes.

[Kandinsky 3](../api/pipelines/kandinsky3) simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses [Flan-UL2](https://huggingface.co/google/flan-ul2) to encode text, a UNet with [BigGan-deep](https://hf.co/papers/1809.11096) blocks, and [Sber-MoVQGAN](https://github.com/ai-forever/MoVQGAN) to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet.

This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more.

Before you begin, make sure you have the following libraries installed:
```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate
```
<Tip warning={true}>
Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn't accept `prompt` as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts `image_embeds` during decoding.
<br>
Kandinsky 3 has a more concise architecture and it doesn't require a prior model. This means its usage is identical to other diffusion models like [Stable Diffusion XL](sdxl).
</Tip>
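Concretely, the difference only shows up in the decoding call. Both calls below are taken from the text-to-image examples later in this guide and assume `image_embeds` and `negative_image_embeds` were already produced by the matching prior pipeline:

```py
# Kandinsky 2.1: the decoder receives the prompt along with the prior embeddings
image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]

# Kandinsky 2.2: the decoder is conditioned on the embeddings only
image = pipeline(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
```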
## Text-to-image

To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates `negative_image_embeds` that correspond to the negative prompt `""`. For better results, you can pass an actual `negative_prompt` to the prior pipeline, but this'll increase the effective batch size of the prior pipeline by 2x.
<hfoptions id="text-to-image">
<hfoption id="Kandinsky 2.1">
```py
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
import torch

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda")
pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda")

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better
image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple()
```
Now pass all the prompts and embeddings to the [`KandinskyPipeline`] to generate an image:
```py
image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/cheeseburger.png"/>
</div>
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
import torch

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16).to("cuda")
pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16).to("cuda")

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better
image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple()
```
Pass the `image_embeds` and `negative_image_embeds` to the [`KandinskyV22Pipeline`] to generate an image:
```py
image = pipeline(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-text-to-image.png"/>
</div>
</hfoption>
<hfoption id="Kandinsky 3">
Kandinsky 3 doesn't require a prior model so you can directly load the [`Kandinsky3Pipeline`] and pass a prompt to generate an image:
```py
from diffusers import Kandinsky3Pipeline
import torch

pipeline = Kandinsky3Pipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
image = pipeline(prompt).images[0]
image
```
</hfoption>
</hfoptions>
🤗 Diffusers also provides an end-to-end API with the [`KandinskyCombinedPipeline`] and [`KandinskyV22CombinedPipeline`], meaning you don't have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the `prior_guidance_scale` and `prior_num_inference_steps` parameters if you want.

Use the [`AutoPipelineForText2Image`] to automatically call the combined pipelines under the hood:
<hfoptions id="text-to-image">
<hfoption id="Kandinsky 2.1">
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()
prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"

image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0]
image
```
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"
image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0]
image
```
</hfoption>
</hfoptions>
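Since the combined pipeline forwards prior-specific arguments for you, you can also adjust `prior_num_inference_steps` in the same call. A minimal sketch, reusing the Kandinsky 2.2 pipeline loaded above (the step count is an illustrative value, not a recommendation):

```py
image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    prior_guidance_scale=1.0,
    prior_num_inference_steps=25,  # illustrative number of steps for the prior pipeline
    guidance_scale=4.0,
    height=768,
    width=768,
).images[0]
image
```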
## Image-to-image

For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. Start by loading the prior pipeline:
<hfoptions id="image-to-image">
<hfoption id="Kandinsky 2.1">
```py
import torch
from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
import torch
from diffusers import KandinskyV22Img2ImgPipeline, KandinskyPriorPipeline

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyV22Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```
</hfoption>
<hfoption id="Kandinsky 3">
Kandinsky 3 doesn't require a prior model so you can directly load the image-to-image pipeline:
```py
from diffusers import Kandinsky3Img2ImgPipeline
from diffusers.utils import load_image
import torch

pipeline = Kandinsky3Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()
```
</hfoption>
</hfoptions>
Download an image to condition on:
```py
from diffusers.utils import load_image

# download image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)
original_image = original_image.resize((768, 512))
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"/>
</div>

Generate the `image_embeds` and `negative_image_embeds` with the prior pipeline:
```py
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple()
```
Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image:
<hfoptions id="image-to-image">
<hfoption id="Kandinsky 2.1">
```py
from diffusers.utils import make_image_grid

image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/img2img_fantasyland.png"/>
</div>
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
from diffusers.utils import make_image_grid

image = pipeline(image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-image-to-image.png"/>
</div>
</hfoption>
<hfoption id="Kandinsky 3">
```py
image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, strength=0.75, num_inference_steps=25).images[0]
image
```
</hfoption>
</hfoptions>
🤗 Diffusers also provides an end-to-end API with the [`KandinskyImg2ImgCombinedPipeline`] and [`KandinskyV22Img2ImgCombinedPipeline`], meaning you don't have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the `prior_guidance_scale` and `prior_num_inference_steps` parameters if you want.

Use the [`AutoPipelineForImage2Image`] to automatically call the combined pipelines under the hood:
<hfoptions id="image-to-image">
<hfoption id="Kandinsky 2.1">
```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True)
pipeline.enable_model_cpu_offload()
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)
original_image.thumbnail((768, 768))

image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)
original_image.thumbnail((768, 768))

image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```
</hfoption>
</hfoptions>

## Inpainting

<Tip warning={true}>
⚠️ The Kandinsky models now use ⬜️ **white pixels** to represent the masked area instead of black pixels. If you are using [`KandinskyInpaintPipeline`] in production, you need to change the mask to use white pixels:
```py
# For PIL input
import PIL.ImageOps
mask = PIL.ImageOps.invert(mask)

# For PyTorch and NumPy input
mask = 1 - mask
```
</Tip>
For inpainting, you'll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. Load the prior pipeline:
<hfoptions id="inpaint">
<hfoption id="Kandinsky 2.1">
```py
from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline
from diffusers.utils import load_image, make_image_grid
import torch
import numpy as np
from PIL import Image

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
from diffusers import KandinskyV22InpaintPipeline, KandinskyV22PriorPipeline
from diffusers.utils import load_image, make_image_grid
import torch
import numpy as np
from PIL import Image

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyV22InpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```
</hfoption>
</hfoptions>
Load an initial image and create a mask:
```py
init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
mask = np.zeros((768, 768), dtype=np.float32)
# mask area above cat's head
mask[:250, 250:-250] = 1
```
Generate the embeddings with the prior pipeline:
```py
prompt = "a hat"
prior_output = prior_pipeline(prompt)
```
Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image:
<hfoptions id="inpaint">
<hfoption id="Kandinsky 2.1">
```py
output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/inpaint_cat_hat.png"/>
</div>
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
output_image = pipeline(image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinskyv22-inpaint.png"/>
</div>
</hfoption>
</hfoptions>
You can also use the end-to-end [`KandinskyInpaintCombinedPipeline`] and [`KandinskyV22InpaintCombinedPipeline`] to call the prior and decoder pipelines together under the hood. Use the [`AutoPipelineForInpainting`] for this:
<hfoptions id="inpaint">
<hfoption id="Kandinsky 2.1">
```py
import torch
import numpy as np
from PIL import Image
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
mask = np.zeros((768, 768), dtype=np.float32)
# mask area above cat's head
mask[:250, 250:-250] = 1
prompt = "a hat" | 54_4_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/kandinsky.md | https://huggingface.co/docs/diffusers/en/using-diffusers/kandinsky/#inpainting | .md | output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
import torch
import numpy as np
from PIL import Image
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
mask = np.zeros((768, 768), dtype=np.float32)
# mask area above cat's head
mask[:250, 250:-250] = 1
prompt = "a hat" | 54_4_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/kandinsky.md | https://huggingface.co/docs/diffusers/en/using-diffusers/kandinsky/#inpainting | .md | output_image = pipe(prompt=prompt, image=original_image, mask_image=mask).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```
</hfoption>
</hfoptions>

## Interpolation

Interpolation allows you to explore the latent space between the image and text embeddings, which is a cool way to see some of the prior model's intermediate outputs. Load the prior pipeline and two images you'd like to interpolate:
<hfoptions id="interpolate">
<hfoption id="Kandinsky 2.1">
```py
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
from diffusers.utils import load_image, make_image_grid
import torch

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg")
make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2)
```
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
from diffusers.utils import load_image, make_image_grid
import torch

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg")
make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2)
```
</hfoption>
</hfoptions>
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">a cat</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">Van Gogh's Starry Night painting</figcaption>
</div>
</div>
Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation!
```py
images_texts = ["a cat", img_1, img_2]
weights = [0.3, 0.3, 0.4]
```
Call the `interpolate` function to generate the embeddings, and then pass them to the pipeline to generate the image:
<hfoptions id="interpolate"> | 54_5_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/kandinsky.md | https://huggingface.co/docs/diffusers/en/using-diffusers/kandinsky/#interpolation | .md | <hfoptions id="interpolate">
<hfoption id="Kandinsky 2.1">
```py
# prompt can be left empty
prompt = ""
prior_out = prior_pipeline.interpolate(images_texts, weights)

pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda")

image = pipeline(prompt, **prior_out, height=768, width=768).images[0]
image
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/starry_cat.png"/>
</div>
</hfoption>
<hfoption id="Kandinsky 2.2">
```py
# prompt can be left empty
prompt = ""
prior_out = prior_pipeline.interpolate(images_texts, weights)

pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
image = pipeline(**prior_out, height=768, width=768).images[0]
image
```
<div class="flex justify-center">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinskyv22-interpolate.png"/>
</div>
</hfoption>
</hfoptions>
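To see how the weights shift the result, a minimal sketch is to reuse `prior_pipeline.interpolate` and the decoder pipeline loaded above with a few different weightings of the same inputs. The example assumes the Kandinsky 2.1 pipelines, the empty `prompt`, and `images_texts` from above (with Kandinsky 2.2, drop the `prompt` argument as before); the weight sets are illustrative.

```py
# Each weight set biases the interpolation toward a different input:
# the text prompt, the cat image, or the Starry Night image.
weight_sets = [[0.6, 0.2, 0.2], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6]]

interpolated = []
for weights in weight_sets:
    prior_out = prior_pipeline.interpolate(images_texts, weights)
    interpolated.append(pipeline(prompt, **prior_out, height=768, width=768).images[0])

make_image_grid(interpolated, rows=1, cols=3)
```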