source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#surface-normals-prediction-quick-start | .md | vis = pipe.image_processor.visualize_normals(normals.prediction)
vis[0].save("einstein_normals.png")
```
The visualization function for normals [`~pipelines.marigold.marigold_image_processing.MarigoldImageProcessor.visualize_normals`] maps the three-dimensional prediction with pixel values in the range `[-1, 1]` into an RGB image.
The visualization function supports flipping surface normals axes to make the visualization compatible with other choices of the frame of reference. | 48_3_1 |
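The snippet below is a minimal sketch of such a flip. It assumes the processor exposes `flip_x`/`flip_y` flags and reuses the `prs-eth/marigold-normals-v1-0` checkpoint from the ensembling example further down; adapt the argument names if your version of the image processor differs.
```python
import diffusers

pipe = diffusers.MarigoldNormalsPipeline.from_pretrained("prs-eth/marigold-normals-v1-0").to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
normals = pipe(image)

# Assumed flags: flipping the Y axis matches frames of reference where Y points down
# (e.g. typical image/camera coordinates); X and Z are left untouched.
vis_flipped = pipe.image_processor.visualize_normals(
    normals.prediction, flip_x=False, flip_y=True
)
vis_flipped[0].save("einstein_normals_y_down.png")
```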
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#surface-normals-prediction-quick-start | .md | Conceptually, each pixel is painted according to the surface normal vector in the frame of reference, where the `X` axis points right, the `Y` axis points up, and the `Z` axis points at the viewer.
Below is the visualized prediction:
<div class="flex gap-4" style="justify-content: center; width: 100%;">
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_normals.png"/> | 48_3_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#surface-normals-prediction-quick-start | .md | <figcaption class="mt-1 text-center text-sm text-gray-500">
Predicted surface normals visualization
</figcaption>
</div>
</div>
In this example, the nose tip almost certainly contains a point on the surface at which the surface normal vector points straight at the viewer, meaning that its coordinates are `[0, 0, 1]`.
This vector maps to the RGB `[128, 128, 255]`, which corresponds to the violet-blue color. | 48_3_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#surface-normals-prediction-quick-start | .md | This vector maps to the RGB `[128, 128, 255]`, which corresponds to the violet-blue color.
Similarly, a surface normal on the cheek in the right part of the image has a large `X` component, which increases the red hue.
Points on the shoulders, which point up with a large `Y` component, promote the green color. | 48_3_4 |
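As a sanity check of this color mapping, the short sketch below applies the standard `(n + 1) / 2 * 255` conversion to a few unit normal vectors; the exact rounding inside the image processor may differ by one intensity level.
```python
import numpy as np

def normal_to_rgb(normal):
    """Map a unit surface normal with components in [-1, 1] to an 8-bit RGB color."""
    normal = np.asarray(normal, dtype=np.float32)
    return np.round((normal + 1.0) / 2.0 * 255.0).astype(np.uint8)

print(normal_to_rgb([0, 0, 1]))  # points at the viewer (nose tip) -> [128 128 255], violet-blue
print(normal_to_rgb([1, 0, 0]))  # points right (cheek on the right) -> [255 128 128], reddish
print(normal_to_rgb([0, 1, 0]))  # points up (shoulders) -> [128 255 128], greenish
```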
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#speeding-up-inference | .md | The above quick start snippets are already optimized for speed: they load the LCM checkpoint, use the `fp16` variant of weights and computation, and perform just one denoising diffusion step.
The `pipe(image)` call completes in 280ms on an RTX 3090 GPU.
Internally, the input image is encoded with the Stable Diffusion VAE encoder, then the U-Net performs one denoising step, and finally, the prediction latent is decoded with the VAE decoder into pixel space. | 48_4_0 |
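A quick way to see these three modules is to inspect the pipeline components and their sizes, as in the sketch below; the printed parameter counts are checkpoint-dependent and are only meant to illustrate why swapping the VAE pays off.
```python
import diffusers
import torch

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")

# One prediction = VAE encode -> single U-Net denoising step -> VAE decode.
for name in ("vae", "unet"):
    module = getattr(pipe, name)
    n_params = sum(p.numel() for p in module.parameters()) / 1e6
    print(f"{name}: {type(module).__name__} with {n_params:.0f}M parameters")
```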
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#speeding-up-inference | .md | In this case, two out of three module calls are dedicated to converting between the pixel and latent spaces of the LDM.
Because Marigold's latent space is compatible with that of the base Stable Diffusion model, it is possible to speed up the pipeline call by more than 3x (85ms on an RTX 3090) by using a [lightweight replacement of the SD VAE](../api/models/autoencoder_tiny):
```diff
import diffusers
import torch | 48_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#speeding-up-inference | .md | pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
"prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")
+ pipe.vae = diffusers.AutoencoderTiny.from_pretrained(
+ "madebyollin/taesd", torch_dtype=torch.float16
+ ).cuda() | 48_4_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#speeding-up-inference | .md | + pipe.vae = diffusers.AutoencoderTiny.from_pretrained(
+ "madebyollin/taesd", torch_dtype=torch.float16
+ ).cuda()
image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
depth = pipe(image)
```
As suggested in [Optimizations](../optimization/torch2.0#torch.compile), adding `torch.compile` may squeeze extra performance depending on the target hardware:
```diff
import diffusers
import torch | 48_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#speeding-up-inference | .md | pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
"prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
depth = pipe(image)
``` | 48_4_4 |
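To verify these timings on your own hardware, a simple benchmark along the lines of the sketch below can be used; the absolute numbers will vary with the GPU, driver, and library versions.
```python
import time

import diffusers
import torch

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")
pipe.vae = diffusers.AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
).to("cuda")
pipe.set_progress_bar_config(disable=True)

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

pipe(image)  # warm-up call: loads kernels and, if enabled, triggers torch.compile
torch.cuda.synchronize()

n_runs = 10
start = time.perf_counter()
for _ in range(n_runs):
    pipe(image)
torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / n_runs * 1000:.0f} ms per call")
```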
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#qualitative-comparison-with-depth-anything | .md | With the above speed optimizations, Marigold delivers more detailed predictions faster than [Depth Anything](https://huggingface.co/docs/transformers/main/en/model_doc/depth_anything) with its largest checkpoint, [LiheYoung/depth-anything-large-hf](https://huggingface.co/LiheYoung/depth-anything-large-hf):
<div class="flex gap-4">
<div style="flex: 1 1 50%; max-width: 50%;"> | 48_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#qualitative-comparison-with-depth-anything | .md | <div class="flex gap-4">
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_depth.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Marigold LCM fp16 with Tiny AutoEncoder
</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;"> | 48_5_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#qualitative-comparison-with-depth-anything | .md | Marigold LCM fp16 with Tiny AutoEncoder
</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/einstein_depthanything_large.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Depth Anything Large
</figcaption>
</div>
</div> | 48_5_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#maximizing-precision-and-ensembling | .md | Marigold pipelines have a built-in ensembling mechanism combining multiple predictions from different random latents.
This is a brute-force way of improving the precision of predictions, capitalizing on the generative nature of diffusion.
The ensembling path is activated automatically when the `ensemble_size` argument is set greater than `1`.
When aiming for maximum precision, it makes sense to adjust `num_inference_steps` simultaneously with `ensemble_size`. | 48_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#maximizing-precision-and-ensembling | .md | When aiming for maximum precision, it makes sense to adjust `num_inference_steps` simultaneously with `ensemble_size`.
The recommended values vary across checkpoints but primarily depend on the scheduler type.
The effect of ensembling is particularly visible with surface normals:
```python
import diffusers | 48_6_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#maximizing-precision-and-ensembling | .md | model_path = "prs-eth/marigold-normals-v1-0"
model_paper_kwargs = {
diffusers.schedulers.DDIMScheduler: {
"num_inference_steps": 10,
"ensemble_size": 10,
},
diffusers.schedulers.LCMScheduler: {
"num_inference_steps": 4,
"ensemble_size": 5,
},
}
image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
pipe = diffusers.MarigoldNormalsPipeline.from_pretrained(model_path).to("cuda")
pipe_kwargs = model_paper_kwargs[type(pipe.scheduler)] | 48_6_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#maximizing-precision-and-ensembling | .md | normals = pipe(image, **pipe_kwargs) | 48_6_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#maximizing-precision-and-ensembling | .md | vis = pipe.image_processor.visualize_normals(normals.prediction)
vis[0].save("einstein_normals.png")
```
<div class="flex gap-4">
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_normals.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Surface normals, no ensembling
</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;"> | 48_6_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#maximizing-precision-and-ensembling | .md | Surface normals, no ensembling
</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_normals.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Surface normals, with ensembling
</figcaption>
</div>
</div>
As can be seen, all areas with fine-grained structures, such as hair, received more conservative and, on average, more correct predictions. | 48_6_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#maximizing-precision-and-ensembling | .md | Such a result is more suitable for precision-sensitive downstream tasks, such as 3D reconstruction. | 48_6_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#quantitative-evaluation | .md | To evaluate Marigold quantitatively in standard leaderboards and benchmarks (such as NYU, KITTI, and other datasets), follow the evaluation protocol outlined in the paper: load the full precision fp32 model and use appropriate values for `num_inference_steps` and `ensemble_size`.
Optionally seed randomness to ensure reproducibility. Maximizing `batch_size` will deliver maximum device utilization.
```python
import diffusers
import torch
device = "cuda"
seed = 2024
model_path = "prs-eth/marigold-v1-0" | 48_7_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#quantitative-evaluation | .md | device = "cuda"
seed = 2024
model_path = "prs-eth/marigold-v1-0"
model_paper_kwargs = {
diffusers.schedulers.DDIMScheduler: {
"num_inference_steps": 50,
"ensemble_size": 10,
},
diffusers.schedulers.LCMScheduler: {
"num_inference_steps": 4,
"ensemble_size": 10,
},
}
image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg") | 48_7_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#quantitative-evaluation | .md | image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
generator = torch.Generator(device=device).manual_seed(seed)
pipe = diffusers.MarigoldDepthPipeline.from_pretrained(model_path).to(device)
pipe_kwargs = model_paper_kwargs[type(pipe.scheduler)]
depth = pipe(image, generator=generator, **pipe_kwargs)
# evaluate metrics
``` | 48_7_2 |
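As an illustration of the `# evaluate metrics` placeholder, the sketch below computes the absolute relative error (AbsRel) after a least-squares affine alignment, which is the usual treatment for affine-invariant depth predictions; the ground-truth array `gt` and the exact shape of `depth.prediction` are assumptions here, not part of the pipeline API.
```python
import numpy as np

def absrel(prediction: np.ndarray, target: np.ndarray, valid: np.ndarray) -> float:
    """Absolute relative error after least-squares scale/shift alignment."""
    p = prediction[valid].astype(np.float64)
    t = target[valid].astype(np.float64)
    # Solve min_{s, b} ||s * p + b - t||^2 to align the affine-invariant prediction.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, t, rcond=None)
    aligned = scale * p + shift
    return float(np.mean(np.abs(aligned - t) / t))

# Hypothetical usage, assuming `gt` is a ground-truth depth map with zeros marking invalid pixels:
# pred = np.asarray(depth.prediction[0]).squeeze()
# print("AbsRel:", absrel(pred, gt, gt > 0))
```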
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#using-predictive-uncertainty | .md | The ensembling mechanism built into Marigold pipelines combines multiple predictions obtained from different random latents.
As a side effect, it can be used to quantify epistemic (model) uncertainty; simply specify `ensemble_size` greater than 1 and set `output_uncertainty=True`.
The resulting uncertainty will be available in the `uncertainty` field of the output.
It can be visualized as follows:
```python
import diffusers
import torch | 48_8_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#using-predictive-uncertainty | .md | pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
"prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")
image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
depth = pipe(
image,
ensemble_size=10, # any number greater than 1; higher values yield higher precision
output_uncertainty=True,
) | 48_8_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#using-predictive-uncertainty | .md | uncertainty = pipe.image_processor.visualize_uncertainty(depth.uncertainty)
uncertainty[0].save("einstein_depth_uncertainty.png")
```
<div class="flex gap-4">
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_depth_uncertainty.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Depth uncertainty
</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;"> | 48_8_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#using-predictive-uncertainty | .md | Depth uncertainty
</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_normals_uncertainty.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Surface normals uncertainty
</figcaption>
</div>
</div>
The interpretation of uncertainty is easy: higher values (white) correspond to pixels where the model struggles to make consistent predictions. | 48_8_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#using-predictive-uncertainty | .md | Evidently, the depth model is the least confident around depth discontinuities, where the object depth changes drastically.
The surface normals model is the least confident in fine-grained structures, such as hair, and dark areas, such as the collar. | 48_8_4 |
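Beyond visualization, the uncertainty map can be thresholded into a confidence mask for downstream filtering. The sketch below is only illustrative: the `0.05` threshold and the assumed per-image shape of `depth.uncertainty` are arbitrary choices rather than part of the documented API.
```python
import diffusers
import numpy as np
import torch
from PIL import Image

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")
image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

depth = pipe(image, ensemble_size=10, output_uncertainty=True)

# Keep only pixels where the ensemble members agree reasonably well.
uncertainty = np.asarray(depth.uncertainty[0]).squeeze()
confident = uncertainty < 0.05
Image.fromarray((confident * 255).astype(np.uint8)).save("einstein_confident_mask.png")
```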
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | Due to Marigold's generative nature, each prediction is unique and defined by the random noise sampled for the latent initialization.
This becomes an obvious drawback compared to traditional end-to-end dense regression networks, as exemplified in the following videos:
<div class="flex gap-4">
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama.gif"/> | 48_9_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | <figcaption class="mt-1 text-center text-sm text-gray-500">Input video</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama_depth_independent.gif"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">Marigold Depth applied to input video frames independently</figcaption>
</div>
</div> | 48_9_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | </div>
</div>
To address this issue, it is possible to pass the `latents` argument to the pipelines, which defines the starting point of diffusion.
Empirically, we found that a convex combination of the very same starting-point noise latent and the latent corresponding to the previous frame's prediction gives sufficiently smooth results, as implemented in the snippet below:
```python
import imageio
from PIL import Image
from tqdm import tqdm
import diffusers
import torch | 48_9_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | device = "cuda"
path_in = "obama.mp4"
path_out = "obama_depth.gif"
pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
"prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to(device)
pipe.vae = diffusers.AutoencoderTiny.from_pretrained(
"madebyollin/taesd", torch_dtype=torch.float16
).to(device)
pipe.set_progress_bar_config(disable=True) | 48_9_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | with imageio.get_reader(path_in) as reader:
size = reader.get_meta_data()['size']
last_frame_latent = None
# Fixed noise latent shared by all frames: the spatial size matches the processing
# resolution (longer side scaled to 768) divided by the VAE's 8x downsampling factor.
latent_common = torch.randn(
(1, 4, 768 * size[1] // (8 * max(size)), 768 * size[0] // (8 * max(size)))
).to(device=device, dtype=torch.float16)
out = []
for frame_id, frame in tqdm(enumerate(reader), desc="Processing Video"):
frame = Image.fromarray(frame)
latents = latent_common
if last_frame_latent is not None:
latents = 0.9 * latents + 0.1 * last_frame_latent | 48_9_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | depth = pipe(
frame, match_input_resolution=False, latents=latents, output_latent=True
)
last_frame_latent = depth.latent
out.append(pipe.image_processor.visualize_depth(depth.prediction)[0]) | 48_9_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | diffusers.utils.export_to_gif(out, path_out, fps=reader.get_meta_data()['fps'])
```
Here, the diffusion process starts from the given computed latent.
The pipeline call sets `output_latent=True` to expose `depth.latent`, which is then blended into the next frame's latent initialization.
The result is much more stable now:
<div class="flex gap-4">
<div style="flex: 1 1 50%; max-width: 50%;"> | 48_9_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | The result is much more stable now:
<div class="flex gap-4">
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama_depth_independent.gif"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">Marigold Depth applied to input video frames independently</figcaption>
</div>
<div style="flex: 1 1 50%; max-width: 50%;"> | 48_9_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#frame-by-frame-video-processing-with-temporal-consistency | .md | </div>
<div style="flex: 1 1 50%; max-width: 50%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama_depth_consistent.gif"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">Marigold Depth with forced latents initialization</figcaption>
</div>
</div> | 48_9_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | A very common application for depth prediction with diffusion models comes in conjunction with ControlNet.
Depth crispness plays a crucial role in obtaining high-quality results from ControlNet.
As seen in comparisons with other methods above, Marigold excels at that task.
The snippet below demonstrates how to load an image, compute depth, and pass it into ControlNet in a compatible format:
```python
import torch
import diffusers | 48_10_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | device = "cuda"
generator = torch.Generator(device=device).manual_seed(2024)
image = diffusers.utils.load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_depth_source.png"
)
pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
"prs-eth/marigold-depth-lcm-v1-0", torch_dtype=torch.float16, variant="fp16"
).to(device) | 48_10_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | depth_image = pipe(image, generator=generator).prediction
depth_image = pipe.image_processor.visualize_depth(depth_image, color_map="binary")
depth_image[0].save("motorcycle_controlnet_depth.png") | 48_10_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | controlnet = diffusers.ControlNetModel.from_pretrained(
"diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to(device)
pipe = diffusers.StableDiffusionXLControlNetPipeline.from_pretrained(
"SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16", controlnet=controlnet
).to(device)
pipe.scheduler = diffusers.DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True) | 48_10_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | controlnet_out = pipe(
prompt="high quality photo of a sports bike, city",
negative_prompt="",
guidance_scale=6.5,
num_inference_steps=25,
image=depth_image,
controlnet_conditioning_scale=0.7,
control_guidance_end=0.7,
generator=generator,
).images
controlnet_out[0].save("motorcycle_controlnet_out.png")
```
<div class="flex gap-4">
<div style="flex: 1 1 33%; max-width: 33%;"> | 48_10_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | ```
<div class="flex gap-4">
<div style="flex: 1 1 33%; max-width: 33%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_depth_source.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Input image
</figcaption>
</div>
<div style="flex: 1 1 33%; max-width: 33%;"> | 48_10_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | Input image
</figcaption>
</div>
<div style="flex: 1 1 33%; max-width: 33%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/motorcycle_controlnet_depth.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
Depth in the format compatible with ControlNet
</figcaption>
</div>
<div style="flex: 1 1 33%; max-width: 33%;"> | 48_10_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | Depth in the format compatible with ControlNet
</figcaption>
</div>
<div style="flex: 1 1 33%; max-width: 33%;">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/motorcycle_controlnet_out.png"/>
<figcaption class="mt-1 text-center text-sm text-gray-500">
ControlNet generation, conditioned on depth and prompt: "high quality photo of a sports bike, city"
</figcaption>
</div>
</div> | 48_10_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md | https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-for-controlnet | .md | </figcaption>
</div>
</div>
Hopefully, you will find Marigold useful for solving your downstream tasks, be it a part of a more broad generative workflow, or a perception task, such as 3D reconstruction. | 48_10_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 49_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 49_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#textual-inversion | .md | [[open-in-colab]]
The [`StableDiffusionPipeline`] supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer). | 49_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#textual-inversion | .md | This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you're interested in teaching a model new concepts with textual inversion, take a look at the [Textual Inversion](../training/text_inversion) training guide.
Import the necessary libraries:
```py
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import make_image_grid
``` | 49_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-1-and-2 | .md | Pick a Stable Diffusion checkpoint and a pre-learned concept from the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer):
```py
pretrained_model_name_or_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
repo_id_embeds = "sd-concepts-library/cat-toy"
```
Now you can load a pipeline, and pass the pre-learned concept to it:
```py
pipeline = StableDiffusionPipeline.from_pretrained( | 49_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-1-and-2 | .md | ```py
pipeline = StableDiffusionPipeline.from_pretrained(
pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True
).to("cuda") | 49_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-1-and-2 | .md | pipeline.load_textual_inversion(repo_id_embeds)
```
Create a prompt with the pre-learned concept by using the special placeholder token `<cat-toy>`, and choose the number of samples and rows of images you'd like to generate:
```py
prompt = "a grafitti in a favela wall with a <cat-toy> on it" | 49_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-1-and-2 | .md | num_samples_per_row = 2
num_rows = 2
```
Then run the pipeline (feel free to adjust parameters like `num_inference_steps` and `guidance_scale` to see how they affect image quality), save the generated images, and visualize them with the `make_image_grid` helper function you imported at the beginning:
```py
all_images = []
for _ in range(num_rows):
images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images
all_images.extend(images) | 49_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-1-and-2 | .md | grid = make_image_grid(all_images, num_rows, num_samples_per_row)
grid
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/textual_inversion_inference.png">
</div> | 49_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-xl | .md | Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you'll need two textual inversion embeddings - one for each text encoder model.
Let's download the SDXL textual inversion embeddings and have a closer look at their structure:
```py
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file | 49_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-xl | .md | file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors")
state_dict = load_file(file)
state_dict
```
```
{'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275],
...,
[-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]],
'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011],
...,
[ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]],
```
There are two tensors, `"clip_g"` and `"clip_l"`. | 49_3_1 |
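A quick way to confirm which embedding belongs to which encoder is to print the tensor shapes, as in the sketch below; the expected feature dimensions (1280 for the larger OpenCLIP encoder, 768 for CLIP ViT-L) reflect the standard SDXL text encoder sizes and are stated as background knowledge rather than taken from this guide.
```py
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors")
state_dict = load_file(file)

for name, tensor in state_dict.items():
    # Expected feature dims: clip_g -> 1280 (pipe.text_encoder_2), clip_l -> 768 (pipe.text_encoder).
    print(name, tuple(tensor.shape))
```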
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-xl | .md | ...,
[ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]],
```
There are two tensors, `"clip_g"` and `"clip_l"`.
`"clip_g"` corresponds to the larger text encoder in SDXL and refers to
`pipe.text_encoder_2`, while `"clip_l"` refers to `pipe.text_encoder`.
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer
to [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`]:
```py
from diffusers import AutoPipelineForText2Image
import torch | 49_3_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-xl | .md | pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")
pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)
pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) | 49_3_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/textual_inversion_inference.md | https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference/#stable-diffusion-xl | .md | # the embedding should be used as a negative embedding, so we pass it as a negative prompt
generator = torch.Generator().manual_seed(33)
image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0]
image
``` | 49_3_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 50_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 50_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#ip-adapter | .md | [IP-Adapter](https://hf.co/papers/2308.06721) is an image prompt adapter that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. Furthermore, this adapter can be reused with other models finetuned from the same base model and it can be combined with other adapters like [ControlNet](../using-diffusers/controlnet). The key idea behind IP-Adapter is the *decoupled cross-attention* mechanism which adds a separate cross-attention layer just for image | 50_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#ip-adapter | .md | idea behind IP-Adapter is the *decoupled cross-attention* mechanism which adds a separate cross-attention layer just for image features instead of using the same cross-attention layer for both text and image features. This allows the model to learn more image-specific features. | 50_1_1 |
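To make the decoupled cross-attention idea concrete, here is a toy sketch rather than the actual IP-Adapter implementation: text and image features get separate key/value tensors, and the image branch is blended in with a tunable scale, analogous to what `set_ip_adapter_scale` controls.
```py
import torch
import torch.nn.functional as F

def decoupled_cross_attention(query, text_kv, image_kv, scale=1.0):
    """Toy decoupled cross-attention: separate K/V for text and image features."""
    text_k, text_v = text_kv
    image_k, image_v = image_kv
    text_out = F.scaled_dot_product_attention(query, text_k, text_v)
    image_out = F.scaled_dot_product_attention(query, image_k, image_v)
    # The image branch is added on top of the text branch, weighted by `scale`.
    return text_out + scale * image_out

# Random tensors with shape (batch, heads, tokens, head_dim), for illustration only.
q = torch.randn(1, 8, 64, 40)
text_kv = (torch.randn(1, 8, 77, 40), torch.randn(1, 8, 77, 40))
image_kv = (torch.randn(1, 8, 4, 40), torch.randn(1, 8, 4, 40))
print(decoupled_cross_attention(q, text_kv, image_kv, scale=0.6).shape)  # torch.Size([1, 8, 64, 40])
```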
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#ip-adapter | .md | > [!TIP]
> Learn how to load an IP-Adapter in the [Load adapters](../using-diffusers/loading_adapters#ip-adapter) guide, and make sure you check out the [IP-Adapter Plus](../using-diffusers/loading_adapters#ip-adapter-plus) section which requires manually loading the image encoder.
This guide will walk you through using IP-Adapter for various tasks and use cases. | 50_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | Let's take a look at how to use IP-Adapter's image prompting capabilities with the [`StableDiffusionXLPipeline`] for tasks like text-to-image, image-to-image, and inpainting. We also encourage you to try out other pipelines such as Stable Diffusion, LCM-LoRA, ControlNet, T2I-Adapter, or AnimateDiff! | 50_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | In all the following examples, you'll see the [`~loaders.IPAdapterMixin.set_ip_adapter_scale`] method. This method controls the amount of text or image conditioning to apply to the model. A value of `1.0` means the model is only conditioned on the image prompt. Lowering this value encourages the model to produce more diverse images, but they may not be as aligned with the image prompt. Typically, a value of `0.5` achieves a good balance between the two prompt types and produces good results.
> [!TIP] | 50_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | > [!TIP]
> In the examples below, try adding `low_cpu_mem_usage=True` to the [`~loaders.IPAdapterMixin.load_ip_adapter`] method to speed up the loading time.
<hfoptions id="tasks">
<hfoption id="Text-to-image">
Crafting the precise text prompt to generate the image you want can be difficult because it may not always capture what you'd like to express. Adding an image alongside the text prompt helps the model better understand what it should generate and can lead to more accurate results. | 50_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | Load a Stable Diffusion XL (SDXL) model and insert an IP-Adapter into the model with the [`~loaders.IPAdapterMixin.load_ip_adapter`] method. Use the `subfolder` parameter to load the SDXL model weights.
```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch | 50_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipeline.set_ip_adapter_scale(0.6)
```
Create a text prompt and load an image prompt before passing them to the pipeline to generate an image.
```py | 50_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | ```
Create a text prompt and load an image prompt before passing them to the pipeline to generate an image.
```py
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
prompt="a polar bear sitting in a chair drinking a milkshake",
ip_adapter_image=image, | 50_2_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | images = pipeline(
prompt="a polar bear sitting in a chair drinking a milkshake",
ip_adapter_image=image,
negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
num_inference_steps=100,
generator=generator,
).images
images[0]
```
<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png"/> | 50_2_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
</div>
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner_2.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
</div>
</div>
</hfoption>
<hfoption id="Image-to-image"> | 50_2_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | </div>
</div>
</hfoption>
<hfoption id="Image-to-image">
IP-Adapter can also help with image-to-image by guiding the model to generate an image that resembles the original image and the image prompt.
Load a Stable Diffusion XL (SDXL) model and insert an IP-Adapter into the model with the [`~loaders.IPAdapterMixin.load_ip_adapter`] method. Use the `subfolder` parameter to load the SDXL model weights.
```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image | 50_2_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | ```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch | 50_2_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | pipeline = AutoPipelineForImage2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipeline.set_ip_adapter_scale(0.6)
```
Pass the original image and the IP-Adapter image prompt to the pipeline to generate an image. Providing a text prompt to the pipeline is optional, but in this example, a text prompt is used to increase image quality.
```py | 50_2_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | ```py
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png")
ip_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_2.png") | 50_2_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | generator = torch.Generator(device="cpu").manual_seed(4)
images = pipeline(
prompt="best quality, high quality",
image=image,
ip_adapter_image=ip_image,
generator=generator,
strength=0.6,
).images
images[0]
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div> | 50_2_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_2.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_3.png"/> | 50_2_13 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
</div>
</div>
</hfoption>
<hfoption id="Inpainting">
IP-Adapter is also useful for inpainting because the image prompt allows you to be much more specific about what you'd like to generate.
Load a Stable Diffusion XL (SDXL) model and insert an IP-Adapter into the model with the [`~loaders.IPAdapterMixin.load_ip_adapter`] method. Use the `subfolder` parameter to load the SDXL model weights.
```py | 50_2_14 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | ```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch | 50_2_15 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | pipeline = AutoPipelineForInpainting.from_pretrained("diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipeline.set_ip_adapter_scale(0.6)
```
Pass a prompt, the original image, mask image, and the IP-Adapter image prompt to the pipeline to generate an image.
```py | 50_2_16 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | Pass a prompt, the original image, mask image, and the IP-Adapter image prompt to the pipeline to generate an image.
```py
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_mask.png")
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png") | 50_2_17 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | ip_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_gummy.png") | 50_2_18 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | generator = torch.Generator(device="cpu").manual_seed(4)
images = pipeline(
prompt="a cute gummy bear waving",
image=image,
mask_image=mask_image,
ip_adapter_image=ip_image,
generator=generator,
num_inference_steps=100,
).images
images[0]
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption> | 50_2_19 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_gummy.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_inpaint.png"/> | 50_2_20 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
</div>
</div>
</hfoption>
<hfoption id="Video">
IP-Adapter can also help you generate videos that are more aligned with your text prompt. For example, let's load [AnimateDiff](../api/pipelines/animatediff) with its motion adapter and insert an IP-Adapter into the model with the [`~loaders.IPAdapterMixin.load_ip_adapter`] method.
> [!WARNING] | 50_2_21 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | > [!WARNING]
> If you're planning on offloading the model to the CPU, make sure you run it after you've loaded the IP-Adapter. When you call [`~DiffusionPipeline.enable_model_cpu_offload`] before loading the IP-Adapter, it offloads the image encoder module to the CPU and it'll return an error when you try to run the pipeline.
```py
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif
from diffusers.utils import load_image | 50_2_22 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipeline = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16)
scheduler = DDIMScheduler.from_pretrained(
"emilianJR/epiCRealism",
subfolder="scheduler",
clip_sample=False,
timestep_spacing="linspace",
beta_schedule="linear",
steps_offset=1,
)
pipeline.scheduler = scheduler
pipeline.enable_vae_slicing() | 50_2_23 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.enable_model_cpu_offload()
```
Pass a prompt and an image prompt to the pipeline to generate a short video.
```py
ip_adapter_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_inpaint.png") | 50_2_24 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | output = pipeline(
prompt="A cute gummy bear waving",
negative_prompt="bad quality, worse quality, low resolution",
ip_adapter_image=ip_adapter_image,
num_frames=16,
guidance_scale=7.5,
num_inference_steps=50,
generator=torch.Generator(device="cpu").manual_seed(0),
)
frames = output.frames[0]
export_to_gif(frames, "gummy_bear.gif")
```
<div class="flex flex-row gap-4">
<div class="flex-1"> | 50_2_25 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | frames = output.frames[0]
export_to_gif(frames, "gummy_bear.gif")
```
<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_inpaint.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
</div>
<div class="flex-1"> | 50_2_26 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#general-tasks | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
</div>
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gummy_bear.gif"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated video</figcaption>
</div>
</div>
</hfoption>
</hfoptions> | 50_2_27 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#configure-parameters | .md | There are a couple of IP-Adapter parameters that are useful to know about and can help you with your image generation tasks. These parameters can make your workflow more efficient or give you more control over image generation. | 50_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#image-embeddings | .md | IP-Adapter enabled pipelines provide the `ip_adapter_image_embeds` parameter to accept precomputed image embeddings. This is particularly useful in scenarios where you need to run the IP-Adapter pipeline multiple times because you have more than one image. For example, [multi IP-Adapter](#multi-ip-adapter) is a specific use case where you provide multiple styling images to generate a specific image in a specific style. Loading and encoding multiple images each time you use the pipeline would be inefficient. | 50_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#image-embeddings | .md | a specific image in a specific style. Loading and encoding multiple images each time you use the pipeline would be inefficient. Instead, you can precompute and save the image embeddings to disk (which can save a lot of space if you're using high-quality images) and load them when you need them. | 50_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#image-embeddings | .md | > [!TIP]
> This parameter also gives you the flexibility to load embeddings from other sources. For example, ComfyUI image embeddings for IP-Adapters are compatible with Diffusers and should work out-of-the-box!
Call the [`~StableDiffusionPipeline.prepare_ip_adapter_image_embeds`] method to encode and generate the image embeddings. Then you can save them to disk with `torch.save`.
> [!TIP] | 50_4_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#image-embeddings | .md | > [!TIP]
> If you're using IP-Adapter with `ip_adapter_image_embeds` instead of `ip_adapter_image`, you can set `load_ip_adapter(image_encoder_folder=None,...)` because you don't need to load an encoder to generate the image embeddings.
```py
image_embeds = pipeline.prepare_ip_adapter_image_embeds(
ip_adapter_image=image,
ip_adapter_image_embeds=None,
device="cuda",
num_images_per_prompt=1,
do_classifier_free_guidance=True,
) | 50_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#image-embeddings | .md | torch.save(image_embeds, "image_embeds.ipadpt")
```
Now load the image embeddings by passing them to the `ip_adapter_image_embeds` parameter.
```py
image_embeds = torch.load("image_embeds.ipadpt")
images = pipeline(
prompt="a polar bear sitting in a chair drinking a milkshake",
ip_adapter_image_embeds=image_embeds,
negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
num_inference_steps=100,
generator=generator,
).images
``` | 50_4_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#ip-adapter-masking | .md | Binary masks specify which portion of the output image should be assigned to an IP-Adapter. This is useful for composing more than one IP-Adapter image. For each input IP-Adapter image, you must provide a binary mask. | 50_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/ip_adapter.md | https://huggingface.co/docs/diffusers/en/using-diffusers/ip_adapter/#ip-adapter-masking | .md | To start, preprocess the input IP-Adapter images with the [`~image_processor.IPAdapterMaskProcessor.preprocess()`] method to generate their masks. For optimal results, provide the output height and width to [`~image_processor.IPAdapterMaskProcessor.preprocess()`]. This ensures masks with different aspect ratios are appropriately stretched. If the input masks already match the aspect ratio of the generated image, you don't have to set the `height` and `width`.
```py | 50_5_1 |
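# A sketch of the preprocessing step described above. The mask file names and the
# 1024x1024 output size are assumptions chosen for illustration only.
from diffusers.image_processor import IPAdapterMaskProcessor
from diffusers.utils import load_image

mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png")
mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png")

output_height, output_width = 1024, 1024

processor = IPAdapterMaskProcessor()
# Stretch each mask to the output resolution so it lines up with the generated image.
masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)
```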