Molecule conformation generation
More coming soon!
Overview

Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffusers’ goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section covers tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and how to enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors.
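As a quick taste of two of those techniques, here is a minimal sketch that loads a pipeline with half-precision weights and turns on sliced attention; the model id and prompt are illustrative placeholders rather than values prescribed by this overview:

# Minimal sketch: half-precision weights plus sliced attention.
# The model id and prompt are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
pipeline.enable_attention_slicing()  # trades a little speed for lower peak memory

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]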
Token merging

Token merging (ToMe) progressively merges redundant tokens/patches in the forward pass of a Transformer-based network, which can speed up the inference latency of StableDiffusionPipeline.

Install ToMe from pip:

pip install tomesd

You can use ToMe from the tomesd library with the apply_patch function:

from diffusers import StableDiffusionPipeline
import torch
import tomesd

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
tomesd.apply_patch(pipeline, ratio=0.5)

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]

The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated images. The most important argument is ratio, which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed up inference even further, but at the cost of some degraded image quality.

To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline. We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script.

Benchmarks

We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment:

- `diffusers` version: 0.15.1
- Python version: 3.8.16
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Huggingface_hub version: 0.13.2
- Transformers version: 4.27.2
- Accelerate version: 0.18.0
- xFormers version: 0.0.16
- tomesd version: 0.1.2

To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers.

| GPU  | Resolution | Batch size | Vanilla | ToMe           | ToMe + xFormers |
|------|------------|------------|---------|----------------|-----------------|
| A100 | 512        | 10         | 6.88    | 5.26 (+23.55%) | 4.69 (+31.83%)  |
|      | 768        | 10         | OOM     | 14.71          | 11              |
|      |            | 8          | OOM     | 11.56          | 8.84            |
|      |            | 4          | OOM     | 5.98           | 4.66            |
|      |            | 2          | 4.99    | 3.24 (+35.07%) | 2.1 (+37.88%)   |
|      |            | 1          | 3.29    | 2.24 (+31.91%) | 2.03 (+38.3%)   |
|      | 1024       | 10         | OOM     | OOM            | OOM             |
|      |            | 8          | OOM     | OOM            | OOM             |
|      |            | 4          | OOM     | 12.51          | 9.09            |
|      |            | 2          | OOM     | 6.52           | 4.96            |
|      |            | 1          | 6.4     | 3.61 (+43.59%) | 2.81 (+56.09%)  |
| V100 | 512        | 10         | OOM     | 10.03          | 9.29            |
|      |            | 8          | OOM     | 8.05           | 7.47            |
|      |            | 4          | 5.7     | 4.3 (+24.56%)  | 3.98 (+30.18%)  |
|      |            | 2          | 3.14    | 2.43 (+22.61%) | 2.27 (+27.71%)  |
|      |            | 1          | 1.88    | 1.57 (+16.49%) | 1.57 (+16.49%)  |
|      | 768        | 10         | OOM     | OOM            | 23.67           |
|      |            | 8          | OOM     | OOM            | 18.81           |
|      |            | 4          | OOM     | 11.81          | 9.7             |
|      |            | 2          | OOM     | 6.27           | 5.2             |
|      |            | 1          | 5.43    | 3.38 (+37.75%) | 2.82 (+48.07%)  |
|      | 1024       | 10         | OOM     | OOM            | OOM             |
|      |            | 8          | OOM     | OOM            | OOM             |
|      |            | 4          | OOM     | OOM            | 19.35           |
|      |            | 2          | OOM     | 13             | 10.78           |
|      |            | 1          | OOM     | 6.66           | 5.54            |

As seen in the table above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline at a higher resolution like 1024x1024. You may be able to speed up inference even more with torch.compile.
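As a rough sketch of that last suggestion (not part of the benchmark above), tomesd can be combined with torch.compile; this assumes PyTorch 2.x, and the ratio and compile settings are illustrative values, not recommendations from the benchmark:

# Illustrative sketch: ToMe plus torch.compile (requires PyTorch 2.x).
import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

tomesd.apply_patch(pipeline, ratio=0.5)  # merge redundant tokens first
# Compiling the UNet adds a one-time warm-up cost on the first call.
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]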
Installation

🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:

- PyTorch installation instructions
- Flax installation instructions

Install with pip

You should install 🤗 Diffusers in a virtual environment.
If you’re unfamiliar with Python virtual environments, take a look at this guide.
A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory:

python -m venv .env

Activate the virtual environment:

source .env/bin/activate

You should also install 🤗 Transformers because 🤗 Diffusers relies on its models.

PyTorch

Note - PyTorch only supports Python 3.8 - 3.11 on Windows.

pip install diffusers["torch"] transformers

JAX

pip install diffusers["flax"] transformers

Install with conda

After activating your virtual environment, you can install 🤗 Diffusers with conda (maintained by the community):

conda install -c conda-forge diffusers

Install from source

Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. To install 🤗 Accelerate:

pip install accelerate

Then install 🤗 Diffusers from source:

pip install git+https://github.com/huggingface/diffusers

This command installs the bleeding edge main version rather than the latest stable version.
The main version is useful for staying up-to-date with the latest developments, for instance if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet.
However, this means the main version may not always be stable.
We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day.
If you run into a problem, please open an Issue so we can fix it even sooner!

Editable install

You will need an editable install if you’d like to:

- Use the main version of the source code.
- Contribute to 🤗 Diffusers and need to test changes in the code.

Clone the repository and install 🤗 Diffusers with the following commands:

git clone https://github.com/huggingface/diffusers.git
cd diffusers

PyTorch

pip install -e ".[torch]"

JAX

pip install -e ".[flax]"

These commands link the cloned repository folder to your Python library paths.
Python will now look inside the folder you cloned to in addition to the normal library paths.
For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to.

You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command:

cd ~/diffusers/
git pull

Your Python environment will find the main version of 🤗 Diffusers on the next run.

Cache

Model weights and files are downloaded from the Hub to a cache, which is usually your home directory. You can change the cache location by specifying the HF_HOME or HUGGINGFACE_HUB_CACHE environment variables, or by configuring the cache_dir parameter in methods like from_pretrained().

Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache:

export HF_HUB_OFFLINE=True

For more details about managing and cleaning the cache, take a look at the caching guide.

Telemetry logging

Our library gathers telemetry information during from_pretrained() requests.
The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class,
and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub.
This usage data helps us debug issues and prioritize new features.
Telemetry is only sent when loading models and pipelines from the Hub,
and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information, and we respect your privacy.
You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal.

On Linux/MacOS:

export DISABLE_TELEMETRY=YES

On Windows:

set DISABLE_TELEMETRY=YES
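Circling back to the cache section above, here is a minimal sketch of pointing from_pretrained() at a custom cache directory; the model id and directory path are placeholders, not values prescribed by the documentation:

# Illustrative sketch only: the model id and cache path are placeholders.
from diffusers import DiffusionPipeline

# Downloads (or reuses) files under ./my_diffusers_cache instead of the default cache.
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", cache_dir="./my_diffusers_cache"
)

With HF_HUB_OFFLINE=True set, the same call only succeeds if the files are already present in that cache.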
Tiny AutoEncoder

Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly.

To use with Stable Diffusion v2.1:

import torch
from diffusers import DiffusionPipeline, AutoencoderTiny
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image

To use with Stable Diffusion XL 1.0:

import torch
from diffusers import DiffusionPipeline, AutoencoderTiny
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image

AutoencoderTiny

class diffusers.AutoencoderTiny

( in_channels: int = 3, out_channels: int = 3, encoder_block_out_channels: Tuple = (64, 64, 64, 64), decoder_block_out_channels: Tuple = (64, 64, 64, 64), act_fn: str = 'relu', latent_channels: int = 4, upsampling_scaling_factor: int = 2, num_encoder_blocks: Tuple = (1, 3, 3, 3), num_decoder_blocks: Tuple = (3, 3, 3, 1), latent_magnitude: int = 3, latent_shift: float = 0.5, force_upcast: bool = False, scaling_factor: float = 1.0 )

Parameters:

- in_channels (int, optional, defaults to 3) — Number of channels in the input image.
- out_channels (int, optional, defaults to 3) — Number of channels in the output.
- encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — Tuple of integers representing the number of output channels for each encoder block. The length of the tuple should be equal to the number of encoder blocks.
- decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — Tuple of integers representing the number of output channels for each decoder block. The length of the tuple should be equal to the number of decoder blocks.
- act_fn (str, optional, defaults to "relu") — Activation function to be used throughout the model.
- latent_channels (int, optional, defaults to 4) — Number of channels in the latent representation. The latent space acts as a compressed representation of the input image.
- upsampling_scaling_factor (int, optional, defaults to 2) — Scaling factor for upsampling in the decoder. It determines the size of the output image during the upsampling process.
- num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The length of the tuple should be equal to the number of stages in the encoder. Each stage has a different number of encoder blocks.
- num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The length of the tuple should be equal to the number of stages in the decoder. Each stage has a different number of decoder blocks.
- latent_magnitude (float, optional, defaults to 3.0) — Magnitude of the latent representation. This parameter scales the latent representation values to control the extent of information preservation.
- latent_shift (float, optional, defaults to 0.5) — Shift applied to the latent representation. This parameter controls the center of the latent space.
- scaling_factor (float, optional, defaults to 1.0) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula z = z * scaling_factor before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper. For this Autoencoder, however, no such scaling factor was used, hence the value of 1.0 as the default.
- force_upcast (bool, optional, defaults to False) — If enabled, it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. The VAE can be fine-tuned / trained to a lower range without losing too much precision, in which case force_upcast can be set to False (see this fp16-friendly AutoEncoder).

A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

disable_slicing ( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling ( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing ( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_tiling ( use_tiling: bool = True )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.

forward ( sample: FloatTensor, return_dict: bool = True )

Parameters:

- sample (torch.FloatTensor) — Input sample.
- return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.

scale_latents ( x: FloatTensor )

raw latents -> [0, 1]

unscale_latents ( x: FloatTensor )

[0, 1] -> raw latents

AutoencoderTinyOutput

class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput ( latents: Tensor )

Parameters:

- latents (torch.Tensor) — Encoded outputs of the Encoder.

Output of AutoencoderTiny encoding method.
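As a practical illustration of the enable_slicing and enable_tiling methods documented above, here is a minimal sketch; it reuses the SDXL setup from the earlier example, and the batch size and step count are illustrative values:

# Minimal sketch (illustrative values): enabling sliced and tiled decoding on the tiny VAE
# to reduce peak memory when decoding several large images at once.
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe.vae.enable_slicing()  # decode the batch one image at a time
pipe.vae.enable_tiling()   # decode each image tile by tile
pipe = pipe.to("cuda")

images = pipe(
    "slice of delicious New York-style berry cheesecake",
    num_inference_steps=25,
    num_images_per_prompt=4,  # illustrative batch size
).images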
Installing xFormers