We recommend the use of xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption.
Starting with xFormers version 0.0.16, released in January 2023, installation can be performed easily using pre-built pip wheels:
pip install xformers
The xFormers PIP package requires the latest version of PyTorch (1.13.1 as of xFormers 0.0.16). If you need to use a previous version of PyTorch, then we recommend you install xFormers from source using the project instructions.
After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption, as discussed here.
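For example, the snippet below is a minimal sketch of enabling xFormers attention on a pipeline; the runwayml/stable-diffusion-v1-5 checkpoint and the prompt are only placeholders:

import torch
from diffusers import DiffusionPipeline

# Example checkpoint; any diffusers pipeline works the same way
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Switch the attention blocks to xFormers memory-efficient attention
pipeline.enable_xformers_memory_efficient_attention()

image = pipeline("an astronaut riding a horse on mars").images[0]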
According to this issue, xFormers v0.0.16 cannot be used for training (fine-tuning or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in that comment.
Attention Processor

An attention processor is a class for applying different types of attention mechanisms.

AttnProcessor

class diffusers.models.attention_processor.AttnProcessor()

Default processor for performing attention-related computations.

AttnProcessor2_0

class diffusers.models.attention_processor.AttnProcessor2_0()

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0).

AttnAddedKVProcessor

class diffusers.models.attention_processor.AttnAddedKVProcessor()

Processor for performing attention-related computations with extra learnable key and value matrices for the text encoder.

AttnAddedKVProcessor2_0

class diffusers.models.attention_processor.AttnAddedKVProcessor2_0()

Processor for performing scaled dot-product attention (enabled by default if you're using PyTorch 2.0), with extra learnable key and value matrices for the text encoder.

CrossFrameAttnProcessor

class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor(batch_size=2)

Cross frame attention processor. Each frame attends to the first frame.

CustomDiffusionAttnProcessor

class diffusers.models.attention_processor.CustomDiffusionAttnProcessor(train_kv: bool = True, train_q_out: bool = True, hidden_size: Optional = None, cross_attention_dim: Optional = None, out_bias: bool = True, dropout: float = 0.0)

Parameters:
- train_kv (bool, defaults to True): Whether to newly train the key and value matrices corresponding to the text features.
- train_q_out (bool, defaults to True): Whether to newly train query matrices corresponding to the latent image features.
- hidden_size (int, optional, defaults to None): The hidden size of the attention layer.
- cross_attention_dim (int, optional, defaults to None): The number of channels in the encoder_hidden_states.
- out_bias (bool, defaults to True): Whether to include the bias parameter in train_q_out.
- dropout (float, optional, defaults to 0.0): The dropout probability to use.

Processor for implementing attention for the Custom Diffusion method.

CustomDiffusionAttnProcessor2_0

class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0(train_kv: bool = True, train_q_out: bool = True, hidden_size: Optional = None, cross_attention_dim: Optional = None, out_bias: bool = True, dropout: float = 0.0)

Parameters:
- train_kv (bool, defaults to True): Whether to newly train the key and value matrices corresponding to the text features.
- train_q_out (bool, defaults to True): Whether to newly train query matrices corresponding to the latent image features.
- hidden_size (int, optional, defaults to None): The hidden size of the attention layer.
- cross_attention_dim (int, optional, defaults to None): The number of channels in the encoder_hidden_states.
- out_bias (bool, defaults to True): Whether to include the bias parameter in train_q_out.
- dropout (float, optional, defaults to 0.0): The dropout probability to use.

Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0's memory-efficient scaled dot-product attention.

CustomDiffusionXFormersAttnProcessor

class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor(train_kv: bool = True, train_q_out: bool = False, hidden_size: Optional = None, cross_attention_dim: Optional = None, out_bias: bool = True, dropout: float = 0.0, attention_op: Optional = None)

Parameters:
- train_kv (bool, defaults to True): Whether to newly train the key and value matrices corresponding to the text features.
- train_q_out (bool, defaults to False): Whether to newly train query matrices corresponding to the latent image features.
- hidden_size (int, optional, defaults to None): The hidden size of the attention layer.
- cross_attention_dim (int, optional, defaults to None): The number of channels in the encoder_hidden_states.
- out_bias (bool, defaults to True): Whether to include the bias parameter in train_q_out.
- dropout (float, optional, defaults to 0.0): The dropout probability to use.
- attention_op (Callable, optional, defaults to None): The base operator to use as the attention operator. It is recommended to set to None and allow xFormers to choose the best operator.

Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method.

FusedAttnProcessor2_0

class diffusers.models.attention_processor.FusedAttnProcessor2_0()

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in the future.

LoRAAttnAddedKVProcessor

class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor(hidden_size: int, cross_attention_dim: Optional = None, rank: int = 4, network_alpha: Optional = None)

Parameters:
- hidden_size (int, optional): The hidden size of the attention layer.
- cross_attention_dim (int, optional, defaults to None): The number of channels in the encoder_hidden_states.
- rank (int, defaults to 4): The dimension of the LoRA update matrices.
- network_alpha (int, optional): Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.
- kwargs (dict): Additional keyword arguments to pass to the LoRALinearLayer layers.

Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text encoder.

LoRAXFormersAttnProcessor

class diffusers.models.attention_processor.LoRAXFormersAttnProcessor(hidden_size: int, cross_attention_dim: int, rank: int = 4, attention_op: Optional = None, network_alpha: Optional = None, **kwargs)

Parameters:
- hidden_size (int, optional): The hidden size of the attention layer.
- cross_attention_dim (int, optional): The number of channels in the encoder_hidden_states.
- rank (int, defaults to 4): The dimension of the LoRA update matrices.
- attention_op (Callable, optional, defaults to None): The base operator to use as the attention operator. It is recommended to set to None and allow xFormers to choose the best operator.
- network_alpha (int, optional): Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.
- kwargs (dict): Additional keyword arguments to pass to the LoRALinearLayer layers.

Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers.

SlicedAttnProcessor

class diffusers.models.attention_processor.SlicedAttnProcessor(slice_size: int)

Parameters:
- slice_size (int, optional): The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and attention_head_dim must be a multiple of the slice_size.

Processor for implementing sliced attention.

SlicedAttnAddedKVProcessor

class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor(slice_size)

Parameters:
- slice_size (int, optional): The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and attention_head_dim must be a multiple of the slice_size.

Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.

XFormersAttnProcessor

class diffusers.models.attention_processor.XFormersAttnProcessor(attention_op: Optional = None)

Parameters:
- attention_op (Callable, optional, defaults to None): The base operator to use as the attention operator. It is recommended to set to None and allow xFormers to choose the best operator.

Processor for implementing memory efficient attention using xFormers.
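As a rough illustration of how these processors are used, the sketch below assigns the PyTorch 2.0 processor to every attention layer of a pipeline's UNet via set_attn_processor(); the checkpoint name and prompt are only examples:

import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

# Example checkpoint used only for illustration
pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace all attention processors in the UNet with scaled dot-product attention
pipeline.unet.set_attn_processor(AttnProcessor2_0())

# The currently installed processors can be inspected as a dict
print(pipeline.unet.attn_processors)

image = pipeline("an astronaut riding a horse on mars").images[0]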
SDXL Turbo

Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach.

The abstract from the paper is:

We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs, Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models.

Tips
- SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details.
- Disable the guidance scale by setting guidance_scale=0.0.
- Use timestep_spacing='trailing' for the scheduler and between 1 and 4 inference steps.
- SDXL Turbo has been trained to generate images of size 512x512.
- SDXL Turbo is open-access but not open-source, meaning you may have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more.

To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints!
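Putting the tips above together, here is a minimal text-to-image sketch that assumes the stabilityai/sdxl-turbo checkpoint; the prompt is only a placeholder:

import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Guidance is disabled and a single denoising step is enough for SDXL Turbo
image = pipeline(
    "A cinematic shot of a raccoon wearing an intricate Italian priest robe",
    num_inference_steps=1,
    guidance_scale=0.0,
    height=512,
    width=512,
).images[0]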
DiffEdit

Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps:

1. The diffusion model denoises an image conditioned on some query text and reference text, which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text.
2. The input image is encoded into latent space with DDIM.
3. The latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image.

This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed:

# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate

The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated by the generate_mask() function, which takes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then:

source_prompt = "a bowl of fruits"
target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch
from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline
pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1",
torch_dtype=torch.float16,
safety_checker=None,
use_safetensors=True,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()

Load the image to edit:

from diffusers.utils import load_image, make_image_grid
img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
raw_image

Use the generate_mask() function to generate the image mask. You'll need to pass it the source_prompt and target_prompt to specify what to edit in the image:

from PIL import Image
source_prompt = "a bowl of fruits"
target_prompt = "a basket of pears"
mask_image = pipeline.generate_mask(
image=raw_image,
source_prompt=source_prompt,
target_prompt=target_prompt,
)
Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))

Next, create the inverted latents and pass it a caption describing the image:

inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents

Finally, pass the image mask and inverted latents to the pipeline. The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt:

output_image = pipeline(
prompt=target_prompt,
mask_image=mask_image,
image_latents=inv_latents,
negative_prompt=source_prompt,
).images[0]
mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)

(Figures: original image, edited image)

Generate source and target embeddings

The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library:

import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)

Provide some initial text to prompt the model to generate the source and target prompts:

source_concept = "bowl"