### IP-Adapter masking

Binary masks specify which portion of the output image should be assigned to an IP-Adapter, which is useful for composing more than one IP-Adapter image. Start by preprocessing the input masks with the [`IPAdapterMaskProcessor`].
```py
from diffusers.image_processor import IPAdapterMaskProcessor

mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png")
mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png")

output_height = 1024
output_width = 1024

processor = IPAdapterMaskProcessor()
masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">mask one</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">mask two</figcaption>
  </div>
</div>

When there is more than one input IP-Adapter image, load them as a list and provide the IP-Adapter scale list. Each of the input IP-Adapter images here corresponds to one of the masks generated above.

```py
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"])
pipeline.set_ip_adapter_scale([[0.7, 0.7]]) # one scale for each image-mask pair
face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")
face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png")

ip_images = [[face_image1, face_image2]]
masks = [masks.reshape(1, masks.shape[0], masks.shape[2], masks.shape[3])]  # one entry per IP-Adapter, shaped (1, num_masks, height, width)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image one</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image two</figcaption>
  </div>
</div>

Now pass the preprocessed masks to `cross_attention_kwargs` in the pipeline call.

```py
generator = torch.Generator(device="cpu").manual_seed(0)
num_images = 1
image = pipeline(
    prompt="2 girls",
    ip_adapter_image=ip_images,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=20,
    num_images_per_prompt=num_images,
    generator=generator,
    cross_attention_kwargs={"ip_adapter_masks": masks}
).images[0]
image
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_attention_mask_result_seed_0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter masking applied</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_no_attention_mask_result_seed_0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">no IP-Adapter masking applied</figcaption>
  </div>
</div>
## Specific use cases
IP-Adapter's image prompting and compatibility with other adapters and models make it a versatile tool for a variety of use cases. This section covers some of the more popular applications of IP-Adapter, and we can't wait to see what you come up with!
### Face model
Generating accurate faces is challenging because they are complex and nuanced. Diffusers supports two IP-Adapter checkpoints specifically trained to generate faces from the [h94/IP-Adapter](https://huggingface.co/h94/IP-Adapter) repository:

* [ip-adapter-full-face_sd15.safetensors](https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-full-face_sd15.safetensors) is conditioned with images of cropped faces and removed backgrounds
* [ip-adapter-plus-face_sd15.safetensors](https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus-face_sd15.safetensors) uses patch embeddings and is conditioned with images of cropped faces

Additionally, Diffusers supports all IP-Adapter checkpoints trained with face embeddings extracted by `insightface` face models. Supported models are from the [h94/IP-Adapter-FaceID](https://huggingface.co/h94/IP-Adapter-FaceID) repository.
For face models, use the [h94/IP-Adapter](https://huggingface.co/h94/IP-Adapter) checkpoint. It is also recommended to use [`DDIMScheduler`] or [`EulerDiscreteScheduler`] for face models.

```py
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image
pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin")
pipeline.set_ip_adapter_scale(0.5)
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein_base.png")
generator = torch.Generator(device="cpu").manual_seed(26)
image = pipeline(
    prompt="A photo of Einstein as a chef, wearing an apron, cooking in a French restaurant",
    ip_adapter_image=image,
    negative_prompt="lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
    generator=generator,
).images[0]
image
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein_base.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>
To use IP-Adapter FaceID models, first extract face embeddings with `insightface`. Then pass the list of tensors to the pipeline as `ip_adapter_image_embeds`.

```py
import cv2
import numpy as np
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image
from insightface.app import FaceAnalysis
pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter("h94/IP-Adapter-FaceID", subfolder=None, weight_name="ip-adapter-faceid_sd15.bin", image_encoder_folder=None)
pipeline.set_ip_adapter_scale(0.6)
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")
ref_images_embeds = []
app = FaceAnalysis(name="buffalo_l", providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
faces = app.get(image)
image = torch.from_numpy(faces[0].normed_embedding)
ref_images_embeds.append(image.unsqueeze(0))
ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)
neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)  # zero embeddings act as the negative for classifier-free guidance
id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device="cuda")
generator = torch.Generator(device="cpu").manual_seed(42)
images = pipeline(
    prompt="A photo of a girl",
    ip_adapter_image_embeds=[id_embeds],
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=20,
    num_images_per_prompt=1,
    generator=generator
).images
```

Both IP-Adapter FaceID Plus and Plus v2 models require CLIP image embeddings. You can prepare face embeddings as shown previously, then extract and pass the CLIP embeddings to the hidden image projection layers.
```py
from insightface.utils import face_align
ref_images_embeds = []
ip_adapter_images = []
app = FaceAnalysis(name="buffalo_l", providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
faces = app.get(image)
ip_adapter_images.append(face_align.norm_crop(image, landmark=faces[0].kps, image_size=224))
image = torch.from_numpy(faces[0].normed_embedding)
ref_images_embeds.append(image.unsqueeze(0))
ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)
neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)
id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device="cuda")
num_images = 1  # number of images per prompt; must match the later pipeline call
clip_embeds = pipeline.prepare_ip_adapter_image_embeds(
    [ip_adapter_images], None, torch.device("cuda"), num_images, True)[0]

pipeline.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
pipeline.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False  # set to True if using IP-Adapter FaceID Plus v2
```
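From here, the pipeline can be called just like in the FaceID example above. This is a minimal sketch, assuming `id_embeds` and `num_images` were prepared as shown; the prompt and sampler settings are only illustrative.

```py
# Sketch: reuse the face embeddings prepared above, mirroring the earlier FaceID call.
generator = torch.Generator(device="cpu").manual_seed(42)

images = pipeline(
    prompt="A photo of a girl",
    ip_adapter_image_embeds=[id_embeds],
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=20,
    num_images_per_prompt=num_images,
    generator=generator,
).images
```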
### Multi IP-Adapter
More than one IP-Adapter can be used at the same time to generate specific images in more diverse styles. For example, you can use IP-Adapter-Face to generate consistent faces and characters, and IP-Adapter Plus to generate those faces in a specific style.

> [!TIP]
> Read the [IP-Adapter Plus](../using-diffusers/loading_adapters#ip-adapter-plus) section to learn why you need to manually load the image encoder.

Load the image encoder with [`~transformers.CLIPVisionModelWithProjection`].
```py
import torch
from diffusers import AutoPipelineForText2Image, DDIMScheduler
from transformers import CLIPVisionModelWithProjection
from diffusers.utils import load_image
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16,
)
```

Next, you'll load a base model, scheduler, and the IP-Adapters. The IP-Adapters to use are passed as a list to the `weight_name` parameter:

* [ip-adapter-plus_sdxl_vit-h](https://huggingface.co/h94/IP-Adapter#ip-adapter-for-sdxl-10) uses patch embeddings and a ViT-H image encoder
* [ip-adapter-plus-face_sdxl_vit-h](https://huggingface.co/h94/IP-Adapter#ip-adapter-for-sdxl-10) has the same architecture but it is conditioned with images of cropped faces

```py
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    image_encoder=image_encoder,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"]
)
pipeline.set_ip_adapter_scale([0.7, 0.3])
pipeline.enable_model_cpu_offload()
```

Load an image prompt and a folder containing images of a certain style you want to use.
```py
face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png")

style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy"
style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)]
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image of face</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_style_grid.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter style images</figcaption>
  </div>
</div>
Pass the image prompt and style images as a list to the `ip_adapter_image` parameter, and run the pipeline!

```py
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipeline(
    prompt="wonderwoman",
    ip_adapter_image=[style_images, face_image],
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    num_images_per_prompt=1,
    generator=generator,
).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_multi_out.png" />
</div>
### Instant generation
[Latent Consistency Models (LCM)](../using-diffusers/inference_with_lcm_lora) are diffusion models that can generate images in as little as 4 steps, compared to other diffusion models like SDXL that typically require many more. This is why image generation with an LCM feels "instantaneous". IP-Adapters can be plugged into an LCM-LoRA model to instantly generate images with an image prompt.
The IP-Adapter weights need to be loaded first, then you can use [`~StableDiffusionPipeline.load_lora_weights`] to load the LoRA style and weight you want to apply to your image.

```py
import torch
from diffusers import DiffusionPipeline, LCMScheduler
from diffusers.utils import load_image
model_id = "sd-dreambooth-library/herge-style"
lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5"
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.load_lora_weights(lcm_lora_id)
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()
```
Try using a lower IP-Adapter scale to condition image generation more on the [herge_style](https://huggingface.co/sd-dreambooth-library/herge-style) checkpoint, and remember to use the special token `herge_style` in your prompt to trigger and apply the style.

```py
pipeline.set_ip_adapter_scale(0.4)
prompt = "herge_style woman in armor, best quality, high quality"
generator = torch.Generator(device="cpu").manual_seed(0)
ip_adapter_image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png")
image = pipeline(
    prompt=prompt,
    ip_adapter_image=ip_adapter_image,
    num_inference_steps=4,
    guidance_scale=1,
).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_herge.png" />
</div>
### Structural control
To control image generation to an even greater degree, you can combine IP-Adapter with a model like [ControlNet](../using-diffusers/controlnet). A ControlNet is also an adapter that can be inserted into a diffusion model to allow for conditioning on an additional control image. The control image can be depth maps, edge maps, pose estimations, and more.

Load a [`ControlNetModel`] checkpoint conditioned on depth maps, insert it into a diffusion model, and load the IP-Adapter.
```py
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image
controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth"
controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16)
pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16)
pipeline.to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
```

Now load the IP-Adapter image and depth map.
```py
ip_adapter_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png")
depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png")
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">depth map</figcaption>
  </div>
</div>

Pass the depth map and IP-Adapter image to the pipeline to generate an image.
```py
generator = torch.Generator(device="cpu").manual_seed(33)
image = pipeline(
    prompt="best quality, high quality",
    image=depth_map,
    ip_adapter_image=ip_adapter_image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    generator=generator,
).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ipa-controlnet-out.png" />
</div>
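To trade structural fidelity against the image prompt, you can also lower the pipeline's `controlnet_conditioning_scale` parameter (default 1.0). This is a sketch of the variation; the value below is only an illustration, not a recommendation from this guide.

```py
# Sketch: weaken the depth conditioning so the IP-Adapter image has more influence.
image = pipeline(
    prompt="best quality, high quality",
    image=depth_map,
    ip_adapter_image=ip_adapter_image,
    controlnet_conditioning_scale=0.5,  # default is 1.0
    num_inference_steps=50,
    generator=generator,
).images[0]
```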
### Style & layout control
[InstantStyle](https://arxiv.org/abs/2404.02733) is a plug-and-play method on top of IP-Adapter that disentangles style and layout from the image prompt to control image generation. This way, you can generate images following only the style or layout of the image prompt, with significantly improved diversity. It works by activating IP-Adapters only in specific parts of the model.
By default, IP-Adapters are inserted into all layers of the model. Use the [`~loaders.IPAdapterMixin.set_ip_adapter_scale`] method with a dictionary to assign scales to the IP-Adapter at different layers.

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
scale = {
    "down": {"block_2": [0.0, 1.0]},
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)
```

This will activate the IP-Adapter at the second layer in the model's down-part block 2 and up-part block 0. The former is the layer where the IP-Adapter injects layout information and the latter injects style. Inserting the IP-Adapter into these two layers generates images that follow both the style and layout of the image prompt, but with contents more aligned to the text prompt.
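If you aren't sure which blocks and layers a particular model exposes, you can inspect the UNet's attention processors; this is a quick diagnostic sketch (the exact layer names vary by architecture).

```py
# Print the attention layer names the scale dictionary maps onto.
for name in pipeline.unet.attn_processors.keys():
    print(name)
```

Now load a style image and run the pipeline with both layers active.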
```py
style_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg")
generator = torch.Generator(device="cpu").manual_seed(26)
image = pipeline(
    prompt="a cat, masterpiece, best quality, high quality",
    ip_adapter_image=style_image,
    negative_prompt="text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
    guidance_scale=5,
    num_inference_steps=30,
    generator=generator,
).images[0]
image
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

In contrast, inserting the IP-Adapter into all layers will often generate images that focus too much on the image prompt and diminish diversity.
Activate the IP-Adapter only in the style layer and then call the pipeline again.

```py
scale = {
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)
generator = torch.Generator(device="cpu").manual_seed(26)
image = pipeline(
    prompt="a cat, masterpiece, best quality, high quality",
    ip_adapter_image=style_image,
    negative_prompt="text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
    guidance_scale=5,
    num_inference_steps=30,
    generator=generator,
).images[0]
image
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_only.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter only in style layer</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_ip_adapter.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter in all layers</figcaption>
  </div>
</div>

Note that you don't have to specify all layers in the dictionary. Layers not included in the dictionary are set to a scale of 0, which disables the IP-Adapter there by default.
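For example, to follow only the layout of the image prompt, you could activate just the layout layer identified earlier. This is a sketch based on the scale dictionary format above, not a configuration shown in the original examples.

```py
# Sketch: activate the IP-Adapter only in the layout layer (down-part block 2, second layer).
scale = {
    "down": {"block_2": [0.0, 1.0]},
}
pipeline.set_ip_adapter_scale(scale)
```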
# Prompt techniques
[[open-in-colab]]

Prompts are important because they describe what you want a diffusion model to generate. The best prompts are detailed, specific, and well-structured to help the model realize your vision. But crafting a great prompt takes time and effort, and sometimes it may not be enough because language and words can be imprecise. This is where you need to boost your prompt with other techniques, such as prompt enhancing and prompt weighting, to get the results you want.
This guide will show you how to use these prompt techniques to generate high-quality images with less effort and adjust the weight of certain keywords in a prompt.
## Prompt engineering
> [!TIP]
> This is not an exhaustive guide on prompt engineering, but it will help you understand the necessary parts of a good prompt. We encourage you to continue experimenting with different prompts and combine them in new ways to see what works best. As you write more prompts, you'll develop an intuition for what works and what doesn't!
New diffusion models do a pretty good job of generating high-quality images from a basic prompt, but it is still important to create a well-written prompt to get the best results. Here are a few tips for writing a good prompt:

1. What is the image *medium*? Is it a photo, a painting, a 3D illustration, or something else?
2. What is the image *subject*? Is it a person, animal, object, or scene?
3. What *details* would you like to see in the image? This is where you can get really creative and have a lot of fun experimenting with different words to bring your image to life. For example, what is the lighting like? What is the vibe and aesthetic? What kind of art or illustration style are you looking for? The more specific and precise words you use, the better the model will understand what you want to generate.
<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/plain-prompt.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"A photo of a banana-shaped couch in a living room"</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/detail-prompt.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"A vibrant yellow banana-shaped couch sits in a cozy living room, its curve cradling a pile of colorful cushions. on the wooden floor, a patterned rug adds a touch of eclectic charm, and a potted plant sits in the corner, reaching towards the sunlight filtering through the windows"</figcaption>
  </div>
</div>
## Prompt enhancing with GPT2
Prompt enhancing is a technique for quickly improving prompt quality without spending too much effort constructing one. It uses a model like GPT2 pretrained on Stable Diffusion text prompts to automatically enrich a prompt with additional important keywords to generate high-quality images.
The technique works by curating a list of specific keywords and forcing the model to generate those words to enhance the original prompt. This way, your prompt can be "a cat" and GPT2 can enhance the prompt to "cinematic film still of a cat basking in the sun on a roof in Turkey, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain quality sharp focus beautiful detailed intricate stunning amazing epic".
> [!TIP]
> You should also use an [*offset noise*](https://www.crosslabs.org//blog/diffusion-with-offset-noise) LoRA to improve the contrast in bright and dark images and create better lighting overall. This [LoRA](https://hf.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_offset_example-lora_1.0.safetensors) is available from [stabilityai/stable-diffusion-xl-base-1.0](https://hf.co/stabilityai/stable-diffusion-xl-base-1.0).
Start by defining certain styles and a list of words (you can check out a more comprehensive list of [words](https://hf.co/LykosAI/GPT-Prompt-Expansion-Fooocus-v2/blob/main/positive.txt) and [styles](https://github.com/lllyasviel/Fooocus/tree/main/sdxl_styles) used by Fooocus) to enhance a prompt with.

```py
import torch
from transformers import GenerationConfig, GPT2LMHeadModel, GPT2Tokenizer, LogitsProcessor, LogitsProcessorList
from diffusers import StableDiffusionXLPipeline
styles = {
    "cinematic": "cinematic film still of {prompt}, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain",
    "anime": "anime artwork of {prompt}, anime style, key visual, vibrant, studio anime, highly detailed",
    "photographic": "cinematic photo of {prompt}, 35mm photograph, film, professional, 4k, highly detailed",
    "comic": "comic of {prompt}, graphic illustration, comic art, graphic novel art, vibrant, highly detailed",
    "lineart": "line art drawing {prompt}, professional, sleek, modern, minimalist, graphic, line art, vector graphics",
    "pixelart": "pixel-art {prompt}, low-res, blocky, pixel art style, 8-bit graphics",
}
words = [
    "aesthetic", "astonishing", "beautiful", "breathtaking", "composition", "contrasted", "epic", "moody", "enhanced",
    "exceptional", "fascinating", "flawless", "glamorous", "glorious", "illumination", "impressive", "improved",
    "inspirational", "magnificent", "majestic", "hyperrealistic", "smooth", "sharp", "focus", "stunning", "detailed",
    "intricate", "dramatic", "high", "quality", "perfect", "light", "ultra", "highly", "radiant", "satisfying",
    "soothing", "sophisticated", "stylish", "sublime", "terrific", "touching", "timeless", "wonderful", "unbelievable",
    "elegant", "awesome", "amazing", "dynamic", "trendy",
]
```
You may have noticed that the `words` list contains certain words that can be paired together to create something more meaningful. For example, the words "high" and "quality" can be combined to create "high quality". Let's pair these words together and remove the words that can't be paired.

```py
word_pairs = ["highly detailed", "high quality", "enhanced quality", "perfect composition", "dynamic light"]

def find_and_order_pairs(s, pairs):
    words = s.split()
    found_pairs = []
    for pair in pairs:
        pair_words = pair.split()
        if pair_words[0] in words and pair_words[1] in words:
            found_pairs.append(pair)
            words.remove(pair_words[0])
            words.remove(pair_words[1])
    for word in words[:]:
        for pair in pairs:
            if word in pair.split():
                words.remove(word)
                break
    ordered_pairs = ", ".join(found_pairs)
    remaining_s = ", ".join(words)
    return ordered_pairs, remaining_s
```
Next, implement a custom [`~transformers.LogitsProcessor`] class that assigns tokens in the `words` list a value of 0 and assigns tokens not in the `words` list a negative value so they aren't picked during generation. This way, generation is biased towards words in the `words` list. After a word from the list is used, it is also assigned a negative value so it isn't picked again.
```py
class CustomLogitsProcessor(LogitsProcessor):
    def __init__(self, bias):
        super().__init__()
        self.bias = bias
    def __call__(self, input_ids, scores):
        if len(input_ids.shape) == 2:
            last_token_id = input_ids[0, -1]
            self.bias[last_token_id] = -1e10  # penalize the last generated token so it isn't picked again
        return scores + self.bias

word_ids = [tokenizer.encode(word, add_prefix_space=True)[0] for word in words]
bias = torch.full((tokenizer.vocab_size,), -float("Inf")).to("cuda")
bias[word_ids] = 0
processor = CustomLogitsProcessor(bias)
processor_list = LogitsProcessorList([processor])
```

Combine the prompt and the `cinematic` style prompt defined in the `styles` dictionary earlier.

```py
prompt = "a cat basking in the sun on a roof in Turkey"
style = "cinematic"
prompt = styles[style].format(prompt=prompt)
prompt
"cinematic film still of a cat basking in the sun on a roof in Turkey, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
```

Load a GPT2 tokenizer and model from the [Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion) checkpoint (this specific checkpoint is trained to generate prompts) to enhance the prompt.
```py
tokenizer = GPT2Tokenizer.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
model = GPT2LMHeadModel.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion", torch_dtype=torch.float16).to(
    "cuda"
)
model.eval()
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
token_count = inputs["input_ids"].shape[1]
max_new_tokens = 50 - token_count

generation_config = GenerationConfig(
    penalty_alpha=0.7,
    top_k=50,
    eos_token_id=model.config.eos_token_id,
    pad_token_id=model.config.eos_token_id,
    do_sample=True,
)
with torch.no_grad():
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=max_new_tokens,
        generation_config=generation_config,
        logits_processor=processor_list,
    )
```

Then you can combine the input prompt and the generated prompt. Feel free to take a look at what the generated prompt (`generated_part`) is, the word pairs that were found (`pairs`), and the remaining words (`words`). This is all packed together in the `enhanced_prompt`.
```py
output_tokens = [tokenizer.decode(generated_id, skip_special_tokens=True) for generated_id in generated_ids]
input_part, generated_part = output_tokens[0][: len(prompt)], output_tokens[0][len(prompt) :]
pairs, words = find_and_order_pairs(generated_part, word_pairs)
formatted_generated_part = pairs + ", " + words
enhanced_prompt = input_part + ", " + formatted_generated_part
enhanced_prompt
["cinematic film still of a cat basking in the sun on a roof in Turkey, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain quality sharp focus beautiful detailed intricate stunning amazing epic"]
```
Finally, load a pipeline and the offset noise LoRA with a *low weight* to generate an image with the enhanced prompt.

```py
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipeline.load_lora_weights(
    "stabilityai/stable-diffusion-xl-base-1.0",
    weight_name="sd_xl_offset_example-lora_1.0.safetensors",
    adapter_name="offset",
)
pipeline.set_adapters(["offset"], adapter_weights=[0.2])
image = pipeline(
    enhanced_prompt,
    width=1152,
    height=896,
    guidance_scale=7.5,
    num_inference_steps=25,
).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"a cat basking in the sun on a roof in Turkey"</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/enhanced-prompt.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"cinematic film still of a cat basking in the sun on a roof in Turkey, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"</figcaption>
  </div>
</div>