Columns: text, source, url, source_section, file_type, id
Motion LoRAs are a collection of LoRAs that work with the `guoyww/animatediff-motion-adapter-v1-5-2` checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. ```python import torch from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter from diffusers.utils import export_to_gif # Load the motion adapter adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) # load SD 1.5 based finetuned model model_id = "SG161222/Realistic_Vision_V5.1_noVAE" pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) pipe.load_lora_weights( "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" ) scheduler = DDIMScheduler.from_pretrained( model_id, subfolder="scheduler", clip_sample=False, beta_schedule="linear", timestep_spacing="linspace", steps_offset=1, ) pipe.scheduler = scheduler # enable memory savings pipe.enable_vae_slicing() pipe.enable_model_cpu_offload() output = pipe( prompt=( "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " "orange sky, warm lighting, fishing boats, ocean waves seagulls, " "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " "golden hour, coastal landscape, seaside scenery" ), negative_prompt="bad quality, worse quality", num_frames=16, guidance_scale=7.5, num_inference_steps=25, generator=torch.Generator("cpu").manual_seed(42), ) frames = output.frames[0] export_to_gif(frames, "animation.gif") ``` <table> <tr> <td><center> masterpiece, bestquality, sunset. <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-zoom-out-lora.gif" alt="masterpiece, bestquality, sunset" style="width: 300px;" /> </center></td> </tr> </table>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#using-motion-loras
#using-motion-loras
.md
158_12
You can also leverage the [PEFT](https://github.com/huggingface/peft) backend to combine Motion LoRAs and create more complex animations. First install PEFT with: ```shell pip install peft ``` Then you can use the following code to combine Motion LoRAs. ```python import torch from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter from diffusers.utils import export_to_gif # Load the motion adapter adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) # load SD 1.5 based finetuned model model_id = "SG161222/Realistic_Vision_V5.1_noVAE" pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) pipe.load_lora_weights( "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", ) pipe.load_lora_weights( "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", ) pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) scheduler = DDIMScheduler.from_pretrained( model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", beta_schedule="linear", steps_offset=1, ) pipe.scheduler = scheduler # enable memory savings pipe.enable_vae_slicing() pipe.enable_model_cpu_offload() output = pipe( prompt=( "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " "orange sky, warm lighting, fishing boats, ocean waves seagulls, " "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " "golden hour, coastal landscape, seaside scenery" ), negative_prompt="bad quality, worse quality", num_frames=16, guidance_scale=7.5, num_inference_steps=25, generator=torch.Generator("cpu").manual_seed(42), ) frames = output.frames[0] export_to_gif(frames, "animation.gif") ``` <table> <tr> <td><center> masterpiece, bestquality, sunset. <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-zoom-out-pan-left-lora.gif" alt="masterpiece, bestquality, sunset" style="width: 300px;" /> </center></td> </tr> </table>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#using-motion-loras-with-peft
#using-motion-loras-with-peft
.md
158_13
[FreeInit: Bridging Initialization Gap in Video Diffusion Models](https://arxiv.org/abs/2312.07537) by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video diffusion models without any additional training. It can be applied to AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent initialization noise. More details can be found in the paper. The following example demonstrates the usage of FreeInit. ```python import torch from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler from diffusers.utils import export_to_gif adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) model_id = "SG161222/Realistic_Vision_V5.1_noVAE" pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") pipe.scheduler = DDIMScheduler.from_pretrained( model_id, subfolder="scheduler", beta_schedule="linear", clip_sample=False, timestep_spacing="linspace", steps_offset=1 ) # enable memory savings pipe.enable_vae_slicing() pipe.enable_vae_tiling() # enable FreeInit # Refer to the enable_free_init documentation for a full list of configurable parameters pipe.enable_free_init(method="butterworth", use_fast_sampling=True) # run inference output = pipe( prompt="a panda playing a guitar, on a boat, in the ocean, high quality", negative_prompt="bad quality, worse quality", num_frames=16, guidance_scale=7.5, num_inference_steps=20, generator=torch.Generator("cpu").manual_seed(666), ) # disable FreeInit pipe.disable_free_init() frames = output.frames[0] export_to_gif(frames, "animation.gif") ``` <Tip warning={true}> FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the `num_iters` parameter that is set when enabling it. Setting the `use_fast_sampling` parameter to `True` can improve the overall performance (at the cost of lower quality compared to when `use_fast_sampling=False` but still better results than vanilla video generation models). </Tip> <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines. </Tip> <table> <tr> <th align=center>Without FreeInit enabled</th> <th align=center>With FreeInit enabled</th> </tr> <tr> <td align=center> panda playing a guitar <br /> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-no-freeinit.gif" alt="panda playing a guitar" style="width: 300px;" /> </td> <td align=center> panda playing a guitar <br/> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-freeinit.gif" alt="panda playing a guitar" style="width: 300px;" /> </td> </tr> </table>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#using-freeinit
#using-freeinit
.md
158_14
[AnimateLCM](https://animatelcm.github.io/) is a motion module checkpoint and an [LCM LoRA](https://huggingface.co/docs/diffusers/using-diffusers/inference_with_lcm_lora) that have been created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors. ```python import torch from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter from diffusers.utils import export_to_gif adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM") pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora") pipe.enable_vae_slicing() pipe.enable_model_cpu_offload() output = pipe( prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", negative_prompt="bad quality, worse quality, low resolution", num_frames=16, guidance_scale=1.5, num_inference_steps=6, generator=torch.Generator("cpu").manual_seed(0), ) frames = output.frames[0] export_to_gif(frames, "animatelcm.gif") ``` <table> <tr> <td><center> A space rocket, 4K. <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatelcm-output.gif" alt="A space rocket, 4K" style="width: 300px;" /> </center></td> </tr> </table> AnimateLCM is also compatible with existing [Motion LoRAs](https://huggingface.co/collections/dn6/animatediff-motion-loras-654cb8ad732b9e3cf4d3c17e). ```python import torch from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter from diffusers.utils import export_to_gif adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM") pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora") pipe.load_lora_weights("guoyww/animatediff-motion-lora-tilt-up", adapter_name="tilt-up") pipe.set_adapters(["lcm-lora", "tilt-up"], [1.0, 0.8]) pipe.enable_vae_slicing() pipe.enable_model_cpu_offload() output = pipe( prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", negative_prompt="bad quality, worse quality, low resolution", num_frames=16, guidance_scale=1.5, num_inference_steps=6, generator=torch.Generator("cpu").manual_seed(0), ) frames = output.frames[0] export_to_gif(frames, "animatelcm-motion-lora.gif") ``` <table> <tr> <td><center> A space rocket, 4K. <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatelcm-motion-lora.gif" alt="A space rocket, 4K" style="width: 300px;" /> </center></td> </tr> </table>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#using-animatelcm
#using-animatelcm
.md
158_15
[FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling](https://arxiv.org/abs/2310.15169) by Haonan Qiu, Menghan Xia, Yong Zhang, Yingqing He, Xintao Wang, Ying Shan, Ziwei Liu. FreeNoise is a sampling mechanism that can generate longer videos with short-video generation models by employing noise rescheduling, temporal attention over sliding windows, and weighted averaging of latent frames. It can also be used with multiple prompts to allow for interpolated video generations. More details are available in the paper. The currently supported AnimateDiff pipelines that can be used with FreeNoise are: - [`AnimateDiffPipeline`] - [`AnimateDiffControlNetPipeline`] - [`AnimateDiffVideoToVideoPipeline`] - [`AnimateDiffVideoToVideoControlNetPipeline`] In order to use FreeNoise, a single line needs to be added to the inference code after loading your pipeline. ```diff + pipe.enable_free_noise() ``` After this, either a single prompt can be used, or multiple prompts can be passed as a dictionary of integer-string pairs. The integer keys of the dictionary correspond to the frame index at which the influence of that prompt is maximum. Each frame index should map to a single string prompt. Prompts for intermediate frame indices that are not passed in the dictionary are created by interpolating between the frame prompts that are passed. By default, simple linear interpolation is used, but you can customize this behaviour by passing a callback to the `prompt_interpolation_callback` parameter when enabling FreeNoise. Full example: ```python import torch from diffusers import AutoencoderKL, AnimateDiffPipeline, LCMScheduler, MotionAdapter from diffusers.utils import export_to_video # Load pipeline dtype = torch.float16 motion_adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=dtype) vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=dtype) pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=motion_adapter, vae=vae, torch_dtype=dtype) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") pipe.load_lora_weights( "wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm_lora" ) pipe.set_adapters(["lcm_lora"], [0.8]) # Enable FreeNoise for long prompt generation pipe.enable_free_noise(context_length=16, context_stride=4) pipe.to("cuda") # Can be a single prompt, or a dictionary mapping frame indices to prompts prompt = { 0: "A caterpillar on a leaf, high quality, photorealistic", 40: "A caterpillar transforming into a cocoon, on a leaf, near flowers, photorealistic", 80: "A cocoon on a leaf, flowers in the background, photorealistic", 120: "A cocoon maturing and a butterfly being born, flowers and leaves visible in the background, photorealistic", 160: "A beautiful butterfly, vibrant colors, sitting on a leaf, flowers in the background, photorealistic", 200: "A beautiful butterfly, flying away in a forest, photorealistic", 240: "A cyberpunk butterfly, neon lights, glowing", } negative_prompt = "bad quality, worst quality, jpeg artifacts" # Run inference output = pipe( prompt=prompt, negative_prompt=negative_prompt, num_frames=256, guidance_scale=2.5, num_inference_steps=10, generator=torch.Generator("cpu").manual_seed(0), ) # Save video frames = output.frames[0] export_to_video(frames, "output.mp4", fps=16) ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#using-freenoise
#using-freenoise
.md
158_16
Since FreeNoise processes multiple frames together, there are parts in the modeling where the memory required exceeds that available on normal consumer GPUs. The main memory bottlenecks that we identified are the spatial and temporal attention blocks, upsampling and downsampling blocks, ResNet blocks, and feed-forward layers. Since most of these blocks operate effectively only on the channel/embedding dimension, one can perform chunked inference across the batch dimensions. The batch dimension in AnimateDiff is either spatial (`[B x F, H x W, C]`) or temporal (`[B x H x W, F, C]`) in nature (this may seem counter-intuitive, but the batch dimensions here are correct, because spatial blocks process across the `B x F` dimension while temporal blocks process across the `B x H x W` dimension). We introduce a `SplitInferenceModule` that makes it easier to chunk across any dimension and perform inference. This saves a lot of memory but comes at the cost of requiring more time for inference. ```diff # Load pipeline and adapters # ... + pipe.enable_free_noise_split_inference() + pipe.unet.enable_forward_chunking(16) ``` The `pipe.enable_free_noise_split_inference` method accepts two parameters: `spatial_split_size` (defaults to `256`) and `temporal_split_size` (defaults to `16`). These can be configured based on how much VRAM you have available. A lower split size results in lower memory usage but slower inference, whereas a larger split size results in faster inference at the cost of more memory.
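Below is a minimal sketch of how the split sizes can be configured explicitly, reusing the AnimateLCM-based FreeNoise setup from the previous section; the split values here are illustrative and should be tuned to the available VRAM.

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter

# Load the AnimateLCM-based pipeline as in the FreeNoise example above
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# Enable FreeNoise for longer videos
pipe.enable_free_noise(context_length=16, context_stride=4)

# Chunk attention and feed-forward computation across the batch dimensions to lower peak memory;
# smaller split sizes reduce memory further but slow down inference (values here are illustrative)
pipe.enable_free_noise_split_inference(spatial_split_size=128, temporal_split_size=8)
pipe.unet.enable_forward_chunking(16)

pipe.to("cuda")
```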
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#freenoise-memory-savings
#freenoise-memory-savings
.md
158_17
`diffusers>=0.30.0` supports loading the AnimateDiff checkpoints into the `MotionAdapter` in their original format via `from_single_file`. ```python import torch from diffusers import AnimateDiffPipeline, MotionAdapter ckpt_path = "https://huggingface.co/Lightricks/LongAnimateDiff/blob/main/lt_long_mm_32_frames.ckpt" adapter = MotionAdapter.from_single_file(ckpt_path, torch_dtype=torch.float16) pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16) ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#using-fromsinglefile-with-the-motionadapter
#using-fromsinglefile-with-the-motionadapter
.md
158_18
AnimateDiffPipeline Pipeline for text-to-video generation. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter ([`MotionAdapter`]): A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#animatediffpipeline
#animatediffpipeline
.md
158_19
AnimateDiffControlNetPipeline Pipeline for text-to-video generation with ControlNet guidance. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter ([`MotionAdapter`]): A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - all - __call__
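A minimal usage sketch follows, assuming an OpenPose ControlNet and per-frame pose images read from the demo clip used elsewhere in these docs; the `conditioning_frames` argument name and the generation settings are assumptions for illustration rather than a definitive recipe.

```python
import imageio
import torch
from PIL import Image
from huggingface_hub import hf_hub_download
from diffusers import AnimateDiffControlNetPipeline, ControlNetModel, MotionAdapter
from diffusers.utils import export_to_gif

# Motion adapter plus an OpenPose ControlNet
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)

pipe = AnimateDiffControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# Read 16 frames from a demo clip of pre-extracted pose skeletons
video_path = hf_hub_download(
    repo_type="space", repo_id="PAIR/Text2Video-Zero", filename="__assets__/poses_skeleton_gifs/dance1_corr.mp4"
)
reader = imageio.get_reader(video_path, "ffmpeg")
pose_frames = [Image.fromarray(reader.get_data(i)) for i in range(16)]

output = pipe(
    prompt="a person dancing on the beach, high quality",
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    conditioning_frames=pose_frames,  # assumed argument name for the per-frame control images
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatediff_controlnet.gif")
```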
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#animatediffcontrolnetpipeline
#animatediffcontrolnetpipeline
.md
158_20
AnimateDiffSparseControlNetPipeline Pipeline for controlled text-to-video generation using the method described in [SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models](https://arxiv.org/abs/2311.16933). This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter ([`MotionAdapter`]): A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - all - __call__
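A rough usage sketch, with the checkpoint names, the `SparseControlNetModel` class, and the `conditioning_frames`/`controlnet_frame_indices` arguments all stated as assumptions for illustration; SparseCtrl conditions only a subset of frames, here just the first one.

```python
import torch
from PIL import Image
from diffusers import AnimateDiffSparseControlNetPipeline, DDIMScheduler, MotionAdapter, SparseControlNetModel
from diffusers.utils import export_to_gif

# Checkpoint names below are assumptions for illustration
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16)
controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-scribble", torch_dtype=torch.float16)

pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.enable_model_cpu_offload()

# Placeholder scribble; replace with a real sketch/scribble image
scribble = Image.new("RGB", (512, 512), color="white")

output = pipe(
    prompt="a cityscape at night, neon lights, high quality",
    negative_prompt="low quality, worst quality",
    num_frames=16,
    conditioning_frames=[scribble],  # assumed argument name
    controlnet_frame_indices=[0],  # assumed argument name: which frames the sparse controls apply to
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatediff_sparsectrl.gif")
```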
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#animatediffsparsecontrolnetpipeline
#animatediffsparsecontrolnetpipeline
.md
158_21
AnimateDiffSDXLPipeline Pipeline for text-to-video generation using Stable Diffusion XL. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files - [`~loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion XL uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. text_encoder_2 ([` CLIPTextModelWithProjection`]): Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_2 (`CLIPTokenizer`): Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `"True"`): Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of `stabilityai/stable-diffusion-xl-base-1-0`. - all - __call__
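A minimal usage sketch, assuming the beta SDXL motion adapter checkpoint `guoyww/animatediff-motion-adapter-sdxl-beta`; treat the checkpoint name and the settings as illustrative.

```python
import torch
from diffusers import AnimateDiffSDXLPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# The SDXL motion adapter checkpoint name is an assumption for illustration
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16)

pipe = AnimateDiffSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, clip_sample=False, timestep_spacing="linspace", beta_schedule="linear", steps_offset=1
)

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt="a panda surfing on a wave, oil painting style, high quality",
    negative_prompt="low quality, worst quality",
    num_frames=16,
    guidance_scale=8.0,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatediff_sdxl.gif")
```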
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#animatediffsdxlpipeline
#animatediffsdxlpipeline
.md
158_22
AnimateDiffVideoToVideoPipeline Pipeline for video-to-video generation. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter ([`MotionAdapter`]): A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - all - __call__
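A minimal usage sketch that stylizes an existing clip, reusing the camel demo video from the Text2Video-Zero sections purely as an input source; the `video` and `strength` argument names are assumptions for illustration.

```python
import imageio
import torch
from PIL import Image
from huggingface_hub import hf_hub_download
from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, clip_sample=False, timestep_spacing="linspace", beta_schedule="linear", steps_offset=1
)
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# Read 8 frames from a demo clip to use as the input video
video_path = hf_hub_download(
    repo_type="space", repo_id="PAIR/Text2Video-Zero", filename="__assets__/pix2pix video/camel.mp4"
)
reader = imageio.get_reader(video_path, "ffmpeg")
video = [Image.fromarray(reader.get_data(i)) for i in range(8)]

output = pipe(
    prompt="a camel walking through the desert at golden hour, high quality",
    negative_prompt="bad quality, worse quality",
    video=video,  # assumed argument name for the input frames
    strength=0.7,  # assumed argument controlling how strongly the input video is altered
    guidance_scale=7.5,
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatediff_vid2vid.gif")
```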
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#animatediffvideotovideopipeline
#animatediffvideotovideopipeline
.md
158_23
AnimateDiffVideoToVideoControlNetPipeline Pipeline for video-to-video generation with ControlNet guidance. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter ([`MotionAdapter`]): A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents. controlnet ([`ControlNetModel`] or `List[ControlNetModel]` or `Tuple[ControlNetModel]` or `MultiControlNetModel`): Provides additional conditioning to the `unet` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#animatediffvideotovideocontrolnetpipeline
#animatediffvideotovideocontrolnetpipeline
.md
158_24
AnimateDiffPipelineOutput Output class for AnimateDiff pipelines. Args: frames (`torch.Tensor`, `np.ndarray`, or `List[List[PIL.Image.Image]]`): List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape `(batch_size, num_frames, channels, height, width)`.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/animatediff.md
https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff/#animatediffpipelineoutput
#animatediffpipelineoutput
.md
158_25
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/
.md
159_0
[Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators](https://huggingface.co/papers/2303.13439) is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, [Zhangyang Wang](https://www.ece.utexas.edu/people/faculty/atlas-wang), Shant Navasardyan, [Humphrey Shi](https://www.humphreyshi.com). Text2Video-Zero enables zero-shot video generation using either: 1. A textual prompt 2. A prompt combined with guidance from poses or edges 3. Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. ![teaser-img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/t2v_zero_teaser.png) The abstract from the paper is: *Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.* You can find additional information about Text2Video-Zero on the [project page](https://text2video-zero.github.io/), [paper](https://arxiv.org/abs/2303.13439), and [original codebase](https://github.com/Picsart-AI-Research/Text2Video-Zero).
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#text2video-zero
#text2video-zero
.md
159_1
To generate a video from a prompt, run the following Python code: ```python import torch from diffusers import TextToVideoZeroPipeline import imageio model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5" pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") prompt = "A panda is playing guitar on times square" result = pipe(prompt=prompt).images result = [(r * 255).astype("uint8") for r in result] imageio.mimsave("video.mp4", result, fps=4) ``` You can change these parameters in the pipeline call: * Motion field strength (see the [paper](https://arxiv.org/abs/2303.13439), Sect. 3.3.1): * `motion_field_strength_x` and `motion_field_strength_y`. Default: `motion_field_strength_x=12`, `motion_field_strength_y=12` * `T` and `T'` (see the [paper](https://arxiv.org/abs/2303.13439), Sect. 3.3.1) * `t0` and `t1` in the range `{0, ..., num_inference_steps}`. Default: `t0=45`, `t1=48` * Video length: * `video_length`, the number of frames to be generated. Default: `video_length=8` We can also generate longer videos by doing the processing in a chunk-by-chunk manner: ```python import torch from diffusers import TextToVideoZeroPipeline import numpy as np import imageio model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5" pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") seed = 0 video_length = 24 #24 ÷ 4fps = 6 seconds chunk_size = 8 prompt = "A panda is playing guitar on times square" # Generate the video chunk-by-chunk result = [] chunk_ids = np.arange(0, video_length, chunk_size - 1) generator = torch.Generator(device="cuda") for i in range(len(chunk_ids)): print(f"Processing chunk {i + 1} / {len(chunk_ids)}") ch_start = chunk_ids[i] ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] # Attach the first frame for Cross Frame Attention frame_ids = [0] + list(range(ch_start, ch_end)) # Fix the seed for the temporal consistency generator.manual_seed(seed) output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) result.append(output.images[1:]) # Concatenate chunks and save result = np.concatenate(result) result = [(r * 255).astype("uint8") for r in result] imageio.mimsave("video.mp4", result, fps=4) ``` - #### SDXL Support In order to use the SDXL model when generating a video from a prompt, use the `TextToVideoZeroSDXLPipeline` pipeline: ```python import torch from diffusers import TextToVideoZeroSDXLPipeline model_id = "stabilityai/stable-diffusion-xl-base-1.0" pipe = TextToVideoZeroSDXLPipeline.from_pretrained( model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True ).to("cuda") ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#text-to-video
#text-to-video
.md
159_2
To generate a video from a prompt with additional pose control: 1. Download a demo video ```python from huggingface_hub import hf_hub_download filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" repo_id = "PAIR/Text2Video-Zero" video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) ``` 2. Read the video containing the extracted pose images ```python from PIL import Image import imageio reader = imageio.get_reader(video_path, "ffmpeg") frame_count = 8 pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] ``` To extract poses from an actual video, read the [ControlNet documentation](controlnet). 3. Run `StableDiffusionControlNetPipeline` with our custom attention processor ```python import torch from diffusers import StableDiffusionControlNetPipeline, ControlNetModel from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5" controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) pipe = StableDiffusionControlNetPipeline.from_pretrained( model_id, controlnet=controlnet, torch_dtype=torch.float16 ).to("cuda") # Set the attention processor pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) # fix latents for all frames latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) prompt = "Darth Vader dancing in a desert" result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images imageio.mimsave("video.mp4", result, fps=4) ``` - #### SDXL Support Since our attention processor also works with SDXL, it can be utilized to generate a video from a prompt using ControlNet models powered by SDXL: ```python import torch from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' model_id = 'stabilityai/stable-diffusion-xl-base-1.0' controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) pipe = StableDiffusionXLControlNetPipeline.from_pretrained( model_id, controlnet=controlnet, torch_dtype=torch.float16 ).to('cuda') # Set the attention processor pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) # fix latents for all frames latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) prompt = "Darth Vader dancing in a desert" result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images imageio.mimsave("video.mp4", result, fps=4) ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#text-to-video-with-pose-control
#text-to-video-with-pose-control
.md
159_3
To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, using the [Canny edge ControlNet model](https://huggingface.co/lllyasviel/sd-controlnet-canny) instead.
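A sketch of the full setup under that substitution, reusing the demo Canny edge video from the DreamBooth section below purely for illustration; the prompt is arbitrary.

```python
import imageio
import torch
from PIL import Image
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

# Download a demo video that already contains per-frame Canny edge images
video_path = hf_hub_download(
    repo_type="space", repo_id="PAIR/Text2Video-Zero", filename="__assets__/canny_videos_mp4/girl_turning.mp4"
)
reader = imageio.get_reader(video_path, "ffmpeg")
canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(8)]

# Same setup as the pose example, but with the Canny edge ControlNet
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# fix latents for all frames
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)

prompt = "a girl turning her head, oil painting style"
result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)
```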
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#text-to-video-with-edge-control
#text-to-video-with-edge-control
.md
159_4
To perform text-guided video editing (with [InstructPix2Pix](pix2pix)): 1. Download a demo video ```python from huggingface_hub import hf_hub_download filename = "__assets__/pix2pix video/camel.mp4" repo_id = "PAIR/Text2Video-Zero" video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) ``` 2. Read video from path ```python from PIL import Image import imageio reader = imageio.get_reader(video_path, "ffmpeg") frame_count = 8 video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] ``` 3. Run `StableDiffusionInstructPix2PixPipeline` with our custom attention processor ```python import torch from diffusers import StableDiffusionInstructPix2PixPipeline from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor model_id = "timbrooks/instruct-pix2pix" pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) prompt = "make it Van Gogh Starry Night style" result = pipe(prompt=[prompt] * len(video), image=video).images imageio.mimsave("edited_video.mp4", result, fps=4) ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#video-instruct-pix2pix
#video-instruct-pix2pix
.md
159_5
Methods **Text-To-Video**, **Text-To-Video with Pose Control** and **Text-To-Video with Edge Control** can run with custom [DreamBooth](../../training/dreambooth) models, as shown below for the [Canny edge ControlNet model](https://huggingface.co/lllyasviel/sd-controlnet-canny) and the [Avatar style DreamBooth](https://huggingface.co/PAIR/text2video-zero-controlnet-canny-avatar) model: 1. Download a demo video ```python from huggingface_hub import hf_hub_download filename = "__assets__/canny_videos_mp4/girl_turning.mp4" repo_id = "PAIR/Text2Video-Zero" video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) ``` 2. Read the video from the path ```python from PIL import Image import imageio reader = imageio.get_reader(video_path, "ffmpeg") frame_count = 8 canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] ``` 3. Run `StableDiffusionControlNetPipeline` with a custom-trained DreamBooth model ```python import torch from diffusers import StableDiffusionControlNetPipeline, ControlNetModel from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor # set model id to custom model model_id = "PAIR/text2video-zero-controlnet-canny-avatar" controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) pipe = StableDiffusionControlNetPipeline.from_pretrained( model_id, controlnet=controlnet, torch_dtype=torch.float16 ).to("cuda") # Set the attention processor pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) # fix latents for all frames latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) prompt = "oil painting of a beautiful girl avatar style" result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images imageio.mimsave("video.mp4", result, fps=4) ``` You can filter the available DreamBooth-trained models with [this link](https://huggingface.co/models?search=dreambooth). <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines. </Tip>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#dreambooth-specialization
#dreambooth-specialization
.md
159_6
TextToVideoZeroPipeline Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] to denoise the encoded video latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`CLIPImageProcessor`]): A [`CLIPImageProcessor`] to extract features from generated images; used as inputs to the `safety_checker`. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#texttovideozeropipeline
#texttovideozeropipeline
.md
159_7
TextToVideoZeroSDXLPipeline Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion XL uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. text_encoder_2 ([` CLIPTextModelWithProjection`]): Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_2 (`CLIPTokenizer`): Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#texttovideozerosdxlpipeline
#texttovideozerosdxlpipeline
.md
159_8
TextToVideoPipelineOutput Output class for zero-shot text-to-video pipeline. Args: images (`List[PIL.Image.Image]` or `np.ndarray`): List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. nsfw_content_detected (`List[bool]`): List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or `None` if safety checking could not be performed.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/text_to_video_zero.md
https://huggingface.co/docs/diffusers/en/api/pipelines/text_to_video_zero/#texttovideopipelineoutput
#texttovideopipelineoutput
.md
159_9
<!--Copyright 2024 The HuggingFace Team and Tencent Hunyuan Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_hunyuandit.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_hunyuandit/
.md
160_0
HunyuanDiTControlNetPipeline is an implementation of ControlNet for [Hunyuan-DiT](https://arxiv.org/abs/2405.08748). ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Hunyuan-DiT generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: *We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.* This code is implemented by Tencent Hunyuan Team. You can find pre-trained checkpoints for Hunyuan-DiT ControlNets on [Tencent Hunyuan](https://huggingface.co/Tencent-Hunyuan). <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines. </Tip>
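A short usage sketch follows; the checkpoint names and the `control_image` argument are assumptions for illustration (see the Tencent Hunyuan organization for the official ControlNet repositories), and the placeholder image should be replaced with a real Canny edge map.

```python
import torch
from PIL import Image
from diffusers import HunyuanDiT2DControlNetModel, HunyuanDiTControlNetPipeline

# Checkpoint names are assumptions for illustration
controlnet = HunyuanDiT2DControlNetModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Canny", torch_dtype=torch.float16
)
pipe = HunyuanDiTControlNetPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.to("cuda")

# Placeholder control image; replace with a real Canny edge map
control_image = Image.new("RGB", (1024, 1024))

image = pipe(
    prompt="a cute cat sitting on a windowsill, golden hour lighting",
    control_image=control_image,  # assumed argument name for the conditioning image
    num_inference_steps=50,
    guidance_scale=5.0,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("hunyuandit_controlnet.png")
```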
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_hunyuandit.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_hunyuandit/#controlnet-with-hunyuan-dit
#controlnet-with-hunyuan-dit
.md
160_1
HunyuanDiTControlNetPipeline Pipeline for English/Chinese-to-image generation using HunyuanDiT. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). HunyuanDiT uses two text encoders: [mT5](https://huggingface.co/google/mt5-base) and a bilingual CLIP (fine-tuned by ourselves). Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. We use `sdxl-vae-fp16-fix`. text_encoder (Optional[`~transformers.BertModel`, `~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). HunyuanDiT uses a fine-tuned bilingual CLIP. tokenizer (Optional[`~transformers.BertTokenizer`, `~transformers.CLIPTokenizer`]): A `BertTokenizer` or `CLIPTokenizer` to tokenize text. transformer ([`HunyuanDiT2DModel`]): The HunyuanDiT model designed by Tencent Hunyuan. text_encoder_2 (`T5EncoderModel`): The mT5 embedder. Specifically, it is 't5-v1_1-xxl'. tokenizer_2 (`MT5Tokenizer`): The tokenizer for the mT5 embedder. scheduler ([`DDPMScheduler`]): A scheduler to be used in combination with HunyuanDiT to denoise the encoded image latents. controlnet ([`HunyuanDiT2DControlNetModel`] or `List[HunyuanDiT2DControlNetModel]` or [`HunyuanDiT2DMultiControlNetModel`]): Provides additional conditioning to the `transformer` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_hunyuandit.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_hunyuandit/#hunyuanditcontrolnetpipeline
#hunyuanditcontrolnetpipeline
.md
160_2
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/kandinsky3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/kandinsky3/
.md
161_0
Kandinsky 3 was created by [Vladimir Arkhipkin](https://github.com/oriBetelgeuse), [Anastasia Maltseva](https://github.com/NastyaMittseva), [Igor Pavlov](https://github.com/boomb0om), [Andrei Filatov](https://github.com/anvilarth), [Arseniy Shakhmatov](https://github.com/cene555), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey), [Denis Dimitrov](https://github.com/denndimitrov), and [Zein Shaheen](https://github.com/zeinsh). The description from its GitHub page: *Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.* Its architecture includes 3 main components: 1. [FLAN-UL2](https://huggingface.co/google/flan-ul2), an encoder-decoder model based on the T5 architecture. 2. A new U-Net architecture featuring BigGAN-deep blocks, which doubles the depth while maintaining the same number of parameters. 3. Sber-MoVQGAN, a decoder proven to have superior results in image restoration. The original codebase can be found at [ai-forever/Kandinsky-3](https://github.com/ai-forever/Kandinsky-3). <Tip> Check out the [Kandinsky Community](https://huggingface.co/kandinsky-community) organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. </Tip> <Tip> Make sure to check out the schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines. </Tip>
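A short text-to-image sketch, assuming the `kandinsky-community/kandinsky-3` checkpoint from the Kandinsky Community organization mentioned above; the prompt and settings are illustrative.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Checkpoint name assumed from the Kandinsky Community organization on the Hub
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats."
image = pipe(prompt, num_inference_steps=25, generator=torch.Generator("cpu").manual_seed(0)).images[0]
image.save("kandinsky3.png")
```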
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/kandinsky3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/kandinsky3/#kandinsky-3
#kandinsky-3
.md
161_1
Kandinsky3Pipeline - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/kandinsky3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/kandinsky3/#kandinsky3pipeline
#kandinsky3pipeline
.md
161_2
Kandinsky3Img2ImgPipeline - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/kandinsky3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/kandinsky3/#kandinsky3img2imgpipeline
#kandinsky3img2imgpipeline
.md
161_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_sd3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sd3/
.md
162_0
StableDiffusion3ControlNetPipeline is an implementation of ControlNet for Stable Diffusion 3. ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that preserves the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This ControlNet code is mainly implemented by [The InstantX Team](https://huggingface.co/InstantX). The inpainting-related code was developed by [The Alimama Creative Team](https://huggingface.co/alimama-creative). You can find pre-trained checkpoints for SD3-ControlNet in the table below:

| ControlNet type | Developer | Link |
| --------------- | --------- | ---- |
| Canny | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Canny) |
| Depth | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Depth) |
| Pose | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Pose) |
| Tile | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Tile) |
| Inpainting | [The Alimama Creative Team](https://huggingface.co/alimama-creative) | [Link](https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting) |

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>
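The sketch below shows one way to run the pipeline with the Canny checkpoint from the table above. The SD3 base repository id, the control image URL, and the conditioning scale are assumptions; swap in your own values.

```py
import torch
from diffusers import StableDiffusion3ControlNetPipeline, SD3ControlNetModel
from diffusers.utils import load_image

# assumes access to the SD3 medium checkpoint and the InstantX Canny ControlNet
controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# a pre-computed Canny edge map used as the control image (hypothetical URL)
control_image = load_image("https://huggingface.co/InstantX/SD3-Controlnet-Canny/resolve/main/canny.jpg")

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_controlnet.png")
```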
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_sd3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sd3/#controlnet-with-stable-diffusion-3
#controlnet-with-stable-diffusion-3
.md
162_1
StableDiffusion3ControlNetPipeline Args: transformer ([`SD3Transformer2DModel`]): Conditional Transformer (MMDiT) architecture to denoise the encoded image latents. scheduler ([`FlowMatchEulerDiscreteScheduler`]): A scheduler to be used in combination with `transformer` to denoise the encoded image latents. vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModelWithProjection`]): [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant, with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size` as its dimension. text_encoder_2 ([`CLIPTextModelWithProjection`]): [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant. text_encoder_3 ([`T5EncoderModel`]): Frozen text-encoder. Stable Diffusion 3 uses [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_2 (`CLIPTokenizer`): Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_3 (`T5TokenizerFast`): Tokenizer of class [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer). controlnet ([`SD3ControlNetModel`] or `List[SD3ControlNetModel]` or [`SD3MultiControlNetModel`]): Provides additional conditioning to the `transformer` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning. image_encoder (`PreTrainedModel`, *optional*): Pre-trained Vision Model for IP Adapter. feature_extractor (`BaseImageProcessor`, *optional*): Image processor for IP Adapter. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_sd3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sd3/#stablediffusion3controlnetpipeline
#stablediffusion3controlnetpipeline
.md
162_2
StableDiffusion3ControlNetInpaintingPipeline Args: transformer ([`SD3Transformer2DModel`]): Conditional Transformer (MMDiT) architecture to denoise the encoded image latents. scheduler ([`FlowMatchEulerDiscreteScheduler`]): A scheduler to be used in combination with `transformer` to denoise the encoded image latents. vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModelWithProjection`]): [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant, with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size` as its dimension. text_encoder_2 ([`CLIPTextModelWithProjection`]): [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant. text_encoder_3 ([`T5EncoderModel`]): Frozen text-encoder. Stable Diffusion 3 uses [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_2 (`CLIPTokenizer`): Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_3 (`T5TokenizerFast`): Tokenizer of class [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer). controlnet ([`SD3ControlNetModel`] or `List[SD3ControlNetModel]` or [`SD3MultiControlNetModel`]): Provides additional conditioning to the `transformer` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning. image_encoder (`PreTrainedModel`, *optional*): Pre-trained Vision Model for IP Adapter. feature_extractor (`BaseImageProcessor`, *optional*): Image processor for IP Adapter. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_sd3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sd3/#stablediffusion3controlnetinpaintingpipeline
#stablediffusion3controlnetinpaintingpipeline
.md
162_3
StableDiffusion3PipelineOutput Output class for Stable Diffusion pipelines. Args: images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/controlnet_sd3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sd3/#stablediffusion3pipelineoutput
#stablediffusion3pipelineoutput
.md
162_4
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/auto_pipeline.md
https://huggingface.co/docs/diffusers/en/api/pipelines/auto_pipeline/
.md
163_0
The `AutoPipeline` is designed to make it easy to load a checkpoint for a task without needing to know the specific pipeline class. Based on the task, the `AutoPipeline` automatically retrieves the correct pipeline class from the checkpoint `model_index.json` file.

> [!TIP]
> Check out the [AutoPipeline](../../tutorials/autopipeline) tutorial to learn how to use this API!
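As a quick illustration, the sketch below loads a text-to-image checkpoint with [`AutoPipelineForText2Image`] and then reuses its components for image-to-image via `from_pipe`. The checkpoint id and prompts are only examples.

```py
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

# AutoPipeline resolves the concrete class (here a Stable Diffusion pipeline) from model_index.json
pipe_txt2img = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe_txt2img("a photo of a red panda eating bamboo").images[0]

# reuse the already-loaded components for the image-to-image task instead of loading them again
pipe_img2img = AutoPipelineForImage2Image.from_pipe(pipe_txt2img)
image = pipe_img2img("a watercolor painting of a red panda", image=image, strength=0.6).images[0]
image.save("red_panda.png")
```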
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/auto_pipeline.md
https://huggingface.co/docs/diffusers/en/api/pipelines/auto_pipeline/#autopipeline
#autopipeline
.md
163_1
AutoPipelineForText2Image [`AutoPipelineForText2Image`] is a generic pipeline class that instantiates a text-to-image pipeline class. The specific underlying pipeline class is automatically selected from either the [`~AutoPipelineForText2Image.from_pretrained`] or [`~AutoPipelineForText2Image.from_pipe`] methods. This class cannot be instantiated using `__init__()` (throws an error). Class attributes: - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the diffusion pipeline's components. - all - from_pretrained - from_pipe
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/auto_pipeline.md
https://huggingface.co/docs/diffusers/en/api/pipelines/auto_pipeline/#autopipelinefortext2image
#autopipelinefortext2image
.md
163_2
AutoPipelineForImage2Image [`AutoPipelineForImage2Image`] is a generic pipeline class that instantiates an image-to-image pipeline class. The specific underlying pipeline class is automatically selected from either the [`~AutoPipelineForImage2Image.from_pretrained`] or [`~AutoPipelineForImage2Image.from_pipe`] methods. This class cannot be instantiated using `__init__()` (throws an error). Class attributes: - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the diffusion pipeline's components. - all - from_pretrained - from_pipe
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/auto_pipeline.md
https://huggingface.co/docs/diffusers/en/api/pipelines/auto_pipeline/#autopipelineforimage2image
#autopipelineforimage2image
.md
163_3
AutoPipelineForInpainting [`AutoPipelineForInpainting`] is a generic pipeline class that instantiates an inpainting pipeline class. The specific underlying pipeline class is automatically selected from either the [`~AutoPipelineForInpainting.from_pretrained`] or [`~AutoPipelineForInpainting.from_pipe`] methods. This class cannot be instantiated using `__init__()` (throws an error). Class attributes: - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the diffusion pipeline's components. - all - from_pretrained - from_pipe
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/auto_pipeline.md
https://huggingface.co/docs/diffusers/en/api/pipelines/auto_pipeline/#autopipelineforinpainting
#autopipelineforinpainting
.md
163_4
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview/
.md
164_0
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.

Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more technical details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI [announcement](https://stability.ai/blog/stable-diffusion-announcement) and our own [blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work).

You can find the original codebase for Stable Diffusion v1.0 at [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) and Stable Diffusion v2.0 at [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion), as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations. Explore these organizations to find the best checkpoint for your use-case!
The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: <div class="flex justify-center"> <div class="rounded-xl border border-gray-200"> <table class="min-w-full divide-y-2 divide-gray-200 bg-white text-sm"> <thead> <tr> <th class="px-4 py-2 font-medium text-gray-900 text-left"> Pipeline </th> <th class="px-4 py-2 font-medium text-gray-900 text-left"> Supported tasks </th> <th class="px-4 py-2 font-medium text-gray-900 text-left"> 🤗 Space </th> </tr> </thead> <tbody class="divide-y divide-gray-200"> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./text2img">StableDiffusion</a> </td> <td class="px-4 py-2 text-gray-700">text-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/stabilityai/stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./img2img">StableDiffusionImg2Img</a> </td> <td class="px-4 py-2 text-gray-700">image-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/huggingface/diffuse-the-rest"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./inpaint">StableDiffusionInpaint</a> </td> <td class="px-4 py-2 text-gray-700">inpainting</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./depth2img">StableDiffusionDepth2Img</a> </td> <td class="px-4 py-2 text-gray-700">depth-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/radames/stable-diffusion-depth2img"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./image_variation">StableDiffusionImageVariation</a> </td> <td class="px-4 py-2 text-gray-700">image variation</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./stable_diffusion_safe">StableDiffusionPipelineSafe</a> </td> <td class="px-4 py-2 text-gray-700">filtered text-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./stable_diffusion_2">StableDiffusion2</a> </td> <td class="px-4 py-2 text-gray-700">text-to-image, inpainting, depth-to-image, super-resolution</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/stabilityai/stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./stable_diffusion_xl">StableDiffusionXL</a> </td> <td class="px-4 py-2 text-gray-700">text-to-image, image-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/RamAnanth1/stable-diffusion-xl"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a 
href="./latent_upscale">StableDiffusionLatentUpscale</a> </td> <td class="px-4 py-2 text-gray-700">super-resolution</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/huggingface-projects/stable-diffusion-latent-upscaler"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./upscale">StableDiffusionUpscale</a> </td> <td class="px-4 py-2 text-gray-700">super-resolution</td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./ldm3d_diffusion">StableDiffusionLDM3D</a> </td> <td class="px-4 py-2 text-gray-700">text-to-rgb, text-to-depth, text-to-pano</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/r23/ldm3d-space"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./ldm3d_diffusion">StableDiffusionUpscaleLDM3D</a> </td> <td class="px-4 py-2 text-gray-700">ldm3d super-resolution</td> </tr> </tbody> </table> </div> </div>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview/#stable-diffusion-pipelines
#stable-diffusion-pipelines
.md
164_1
To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview/#tips
#tips
.md
164_2
[`StableDiffusionPipeline`] uses the [`PNDMScheduler`] by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the [`EulerDiscreteScheduler`] instead of the default:

```py
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

# or
euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview/#explore-tradeoff-between-speed-and-quality
#explore-tradeoff-between-speed-and-quality
.md
164_3
To save memory and use the same components across multiple pipelines, use the `.components` attribute to avoid loading the weights into RAM more than once.

```py
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionInpaintPipeline,
)

text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
inpaint = StableDiffusionInpaintPipeline(**text2img.components)

# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview/#reuse-pipeline-components-to-save-memory
#reuse-pipeline-components-to-save-memory
.md
164_4
The Stable Diffusion pipelines are automatically supported in [Gradio](https://github.com/gradio-app/gradio/), a library that makes creating beautiful and user-friendly machine learning apps on the web a breeze. First, make sure you have Gradio installed:

```sh
pip install -U gradio
```

Then, create a web demo around any Stable Diffusion-based pipeline. For example, you can create an image generation pipeline in a single line of code with Gradio's [`Interface.from_pipeline`](https://www.gradio.app/docs/interface#interface-from-pipeline) function:

```py
from diffusers import StableDiffusionPipeline
import gradio as gr

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

gr.Interface.from_pipeline(pipe).launch()
```

which opens an intuitive drag-and-drop interface in your browser:

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gradio-panda.png)

Similarly, you could create a demo for an image-to-image pipeline with:

```py
from diffusers import StableDiffusionImg2ImgPipeline
import gradio as gr

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

gr.Interface.from_pipeline(pipe).launch()
```

By default, the web demo runs on a local server. If you'd like to share it with others, you can generate a temporary public link by setting `share=True` in `launch()`. Or, you can host your demo on [Hugging Face Spaces](https://huggingface.co/spaces) for a permanent link.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview/#create-web-demos-using-gradio
#create-web-demos-using-gradio
.md
164_5
<!--Copyright 2024 The Intel Labs Team Authors and HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/ldm3d_diffusion/
.md
165_0
LDM3D was proposed in [LDM3D: Latent Diffusion Model for 3D](https://huggingface.co/papers/2305.10853) by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal.

Unlike existing text-to-image diffusion models such as [Stable Diffusion](./overview), which only generate an image, LDM3D generates both an image and a depth map from a given text prompt. With almost the same number of parameters, LDM3D creates a latent space that can compress both the RGB images and the depth maps.

Two checkpoints are available for use:
- [ldm3d-original](https://huggingface.co/Intel/ldm3d). The original checkpoint used in the [paper](https://arxiv.org/pdf/2305.10853.pdf).
- [ldm3d-4c](https://huggingface.co/Intel/ldm3d-4c). The new version of LDM3D using 4-channel inputs instead of 6-channel inputs, finetuned on higher-resolution images.

The abstract from the paper is:

*This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at [this url](https://t.ly/tdi2).*

<Tip>

Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

</Tip>
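A minimal sketch of generating an RGB image and its depth map with [`StableDiffusionLDM3DPipeline`] is shown below, assuming the `Intel/ldm3d-4c` checkpoint listed above; the prompt and step count are arbitrary.

```py
import torch
from diffusers import StableDiffusionLDM3DPipeline

pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c", torch_dtype=torch.float16).to("cuda")

prompt = "a lighthouse on a cliff at sunset, dramatic clouds"
output = pipe(prompt, num_inference_steps=50)

# the output carries both modalities: an RGB image and the corresponding depth map
rgb_image, depth_image = output.rgb[0], output.depth[0]
rgb_image.save("lighthouse_rgb.jpg")
depth_image.save("lighthouse_depth.png")
```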
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/ldm3d_diffusion/#text-to-rgb-depth
#text-to-rgb-depth
.md
165_1
StableDiffusionLDM3DPipeline Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`~transformers.CLIPImageProcessor`]): A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/ldm3d_diffusion/#stablediffusionldm3dpipeline
#stablediffusionldm3dpipeline
.md
165_2
LDM3DPipelineOutput Output class for Stable Diffusion pipelines. Args: rgb (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. depth (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. nsfw_content_detected (`List[bool]`) List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or `None` if safety checking could not be performed. - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/ldm3d_diffusion/#ldm3dpipelineoutput
#ldm3dpipelineoutput
.md
165_3
[LDM3D-VR](https://arxiv.org/pdf/2311.03226.pdf) is an extended version of LDM3D.

The abstract from the paper is:

*Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods.*

Two checkpoints are available for use:
- [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano). This checkpoint enables the generation of panoramic images and requires the [`StableDiffusionLDM3DPipeline`] pipeline to be used.
- [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr). This checkpoint enables the upscaling of RGB and depth images. It can be used in cascade after the original LDM3D pipeline through the `StableDiffusionUpscaleLDM3DPipeline` community pipeline.
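For the panoramic variant, the same pipeline class can be pointed at the `Intel/ldm3d-pano` checkpoint; the 1024x512 output resolution in the sketch below is an assumption you may need to adjust.

```py
import torch
from diffusers import StableDiffusionLDM3DPipeline

# same pipeline class as above, different checkpoint (panoramic variant)
pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-pano", torch_dtype=torch.float16).to("cuda")

prompt = "360 view of a tropical beach with palm trees"
output = pipe(prompt, width=1024, height=512, num_inference_steps=50)

output.rgb[0].save("beach_pano_rgb.jpg")
output.depth[0].save("beach_pano_depth.png")
```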
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/ldm3d_diffusion/#upscaler
#upscaler
.md
165_4
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/sdxl_turbo.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/sdxl_turbo/
.md
166_0
Stable Diffusion XL (SDXL) Turbo was proposed in [Adversarial Diffusion Distillation](https://stability.ai/research/adversarial-diffusion-distillation) by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach.

The abstract from the paper is:

*We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs, Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models.*
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/sdxl_turbo.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/sdxl_turbo/#sdxl-turbo
#sdxl-turbo
.md
166_1
- SDXL Turbo uses the exact same architecture as [SDXL](./stable_diffusion_xl), which means it also has the same API. Please refer to the [SDXL](./stable_diffusion_xl) API reference for more details.
- SDXL Turbo should disable guidance by setting `guidance_scale=0.0`.
- SDXL Turbo should use `timestep_spacing='trailing'` for the scheduler and use between 1 and 4 steps.
- SDXL Turbo has been trained to generate images of size 512x512.
- SDXL Turbo is open-access, but not open-source, meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the [official model card](https://huggingface.co/stabilityai/sdxl-turbo) to learn more.

A minimal usage sketch reflecting these settings follows the tip below.

<Tip>

To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the [SDXL Turbo](../../../using-diffusers/sdxl_turbo) guide.

Check out the [Stability AI](https://huggingface.co/stabilityai) Hub organization for the official base and refiner model checkpoints!

</Tip>
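The sketch below applies the tips above (a single denoising step and disabled guidance) with [`AutoPipelineForText2Image`]; the fp16 variant flag is an assumption about the checkpoint layout.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

prompt = "A cinematic shot of a baby raccoon wearing an intricate Italian priest robe."

# guidance is disabled and only a single step is used, as recommended above
image = pipe(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
image.save("sdxl_turbo.png")
```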
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/sdxl_turbo.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/sdxl_turbo/#tips
#tips
.md
166_2
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/k_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/k_diffusion/
.md
167_0
[k-diffusion](https://github.com/crowsonkb/k-diffusion) is a popular library created by [Katherine Crowson](https://github.com/crowsonkb/). We provide `StableDiffusionKDiffusionPipeline` and `StableDiffusionXLKDiffusionPipeline` that allow you to run Stable Diffusion with samplers from k-diffusion.

Note that most of the samplers from k-diffusion are implemented in Diffusers and we recommend using the existing schedulers. You can find a mapping between k-diffusion samplers and schedulers in Diffusers [here](https://huggingface.co/docs/diffusers/api/schedulers/overview).
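As a rough sketch, the pipeline exposes a `set_scheduler` method that accepts a k-diffusion sampler name; the sampler string and checkpoint below are assumptions, and the `k-diffusion` package must be installed.

```py
import torch
from diffusers import StableDiffusionKDiffusionPipeline

# requires `pip install k-diffusion`
pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# select a k-diffusion sampler by name (assumed sampler identifier)
pipe.set_scheduler("sample_dpmpp_2m")

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("k_diffusion.png")
```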
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/k_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/k_diffusion/#k-diffusion
#k-diffusion
.md
167_1
StableDiffusionKDiffusionPipeline
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/k_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/k_diffusion/#stablediffusionkdiffusionpipeline
#stablediffusionkdiffusionpipeline
.md
167_2
StableDiffusionXLKDiffusionPipeline
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/k_diffusion.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/k_diffusion/#stablediffusionxlkdiffusionpipeline
#stablediffusionxlkdiffusionpipeline
.md
167_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/image_variation.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/image_variation/
.md
168_0
The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by [Justin Pinkney](https://www.justinpinkney.com/) from [Lambda](https://lambdalabs.com/). The original codebase can be found at [LambdaLabsML/lambda-diffusers](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) and additional official checkpoints for image variation can be found at [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers). <Tip> Make sure to check out the Stable Diffusion [Tips](./overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! </Tip>
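A minimal sketch is shown below; the `revision="v2.0"` argument and the example image URL are assumptions based on the checkpoint card, so adjust them as needed.

```py
import torch
from diffusers import StableDiffusionImageVariationPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers", revision="v2.0", torch_dtype=torch.float16
).to("cuda")

# any RGB image can be used as the conditioning input (hypothetical example URL)
init_image = load_image("https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg")

image = pipe(image=init_image, guidance_scale=3.0, num_inference_steps=50).images[0]
image.save("variation.png")
```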
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/image_variation.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/image_variation/#image-variation
#image-variation
.md
168_1
StableDiffusionImageVariationPipeline Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder ([`~transformers.CLIPVisionModelWithProjection`]): Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`~transformers.CLIPImageProcessor`]): A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - all - __call__ - enable_attention_slicing - disable_attention_slicing - enable_xformers_memory_efficient_attention - disable_xformers_memory_efficient_attention
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/image_variation.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/image_variation/#stablediffusionimagevariationpipeline
#stablediffusionimagevariationpipeline
.md
168_2
StableDiffusionPipelineOutput Output class for Stable Diffusion pipelines. Args: images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. nsfw_content_detected (`List[bool]`) List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or `None` if safety checking could not be performed.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/image_variation.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/image_variation/#stablediffusionpipelineoutput
#stablediffusionpipelineoutput
.md
168_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img/
.md
169_0
The Stable Diffusion model was created by researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [Runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionPipeline`] is capable of generating photorealistic images given any text input. It's trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs.

Stable Diffusion builds on latent diffusion, which was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.

The abstract from the paper is:

*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.*

<Tip>

Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

</Tip>
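For a quick start, the sketch below runs [`StableDiffusionPipeline`] end to end; the checkpoint id is one of several compatible options.

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("astronaut.png")
```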
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img/#text-to-image
#text-to-image
.md
169_1
StableDiffusionPipeline Pipeline for text-to-image generation using Stable Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`~transformers.CLIPImageProcessor`]): A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - all - __call__ - enable_attention_slicing - disable_attention_slicing - enable_vae_slicing - disable_vae_slicing - enable_xformers_memory_efficient_attention - disable_xformers_memory_efficient_attention - enable_vae_tiling - disable_vae_tiling - load_textual_inversion - from_single_file - load_lora_weights - save_lora_weights
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img/#stablediffusionpipeline
#stablediffusionpipeline
.md
169_2
StableDiffusionPipelineOutput Output class for Stable Diffusion pipelines. Args: images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. nsfw_content_detected (`List[bool]`) List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or `None` if safety checking could not be performed.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img/#stablediffusionpipelineoutput
#stablediffusionpipelineoutput
.md
169_3
FlaxStableDiffusionPipeline - all - __call__
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img/#flaxstablediffusionpipeline
#flaxstablediffusionpipeline
.md
169_4
FlaxStableDiffusionPipelineOutput
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img/#flaxstablediffusionpipelineoutput
#flaxstablediffusionpipelineoutput
.md
169_5
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/latent_upscale.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/latent_upscale/
.md
170_0
The Stable Diffusion latent upscaler model was created by [Katherine Crowson](https://github.com/crowsonkb/k-diffusion) in collaboration with [Stability AI](https://stability.ai/). It is used to enhance the output image resolution by a factor of 2 (see this demo [notebook](https://colab.research.google.com/drive/1o1qYJcFeywzCIdkfKJy7cTpgZTCM2EI4) for a demonstration of the original implementation). <Tip> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations! </Tip>
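The sketch below first produces latents with a base Stable Diffusion pipeline (via `output_type="latent"`) and then upscales them; the checkpoint ids, prompt, and step counts are assumptions.

```py
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

generator = torch.manual_seed(33)

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut, high resolution, unreal engine, ultra realistic"

# keep the output in latent space so it can be fed directly to the upscaler
low_res_latents = pipe(prompt, generator=generator, output_type="latent").images

upscaled_image = upscaler(
    prompt=prompt,
    image=low_res_latents,
    num_inference_steps=20,
    guidance_scale=0,
    generator=generator,
).images[0]
upscaled_image.save("astronaut_1024.png")
```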
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/latent_upscale.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/latent_upscale/#latent-upscaler
#latent-upscaler
.md
170_1
StableDiffusionLatentUpscalePipeline Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A [`EulerDiscreteScheduler`] to be used in combination with `unet` to denoise the encoded image latents. - all - __call__ - enable_sequential_cpu_offload - enable_attention_slicing - disable_attention_slicing - enable_xformers_memory_efficient_attention - disable_xformers_memory_efficient_attention
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/latent_upscale.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/latent_upscale/#stablediffusionlatentupscalepipeline
#stablediffusionlatentupscalepipeline
.md
170_2
StableDiffusionPipelineOutput Output class for Stable Diffusion pipelines. Args: images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. nsfw_content_detected (`List[bool]`) List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or `None` if safety checking could not be performed.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/latent_upscale.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/latent_upscale/#stablediffusionpipelineoutput
#stablediffusionpipelineoutput
.md
170_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_2/
.md
171_0
Stable Diffusion 2 is a text-to-image _latent diffusion_ model built upon the work of the original [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release), and it was led by Robin Rombach and Katherine Crowson from [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/).

*The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. These models are trained on an aesthetic subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/) created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using [LAION’s NSFW filter](https://openreview.net/forum?id=M3Y74vmsMcY).*

For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official [announcement post](https://stability.ai/blog/stable-diffusion-v2-release).

The architecture of Stable Diffusion 2 is more or less identical to the original [Stable Diffusion model](./text2img), so check out its API documentation for how to use Stable Diffusion 2. We recommend using the [`DPMSolverMultistepScheduler`] as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps.

Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image:

| Task | Repository |
|-------------------------|---------------------------------------------------------------------------------------------------------------|
| text-to-image (512x512) | [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) |
| text-to-image (768x768) | [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) |
| inpainting | [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) |
| super-resolution | [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) |
| depth-to-image | [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) |

Here are some examples of how to use Stable Diffusion 2 for each task:

<Tip>

Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

</Tip>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_2/#stable-diffusion-2
#stable-diffusion-2
.md
171_1
```py
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

repo_id = "stabilityai/stable-diffusion-2-base"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, variant="fp16")

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_2/#text-to-image
#text-to-image
.md
171_2
```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import load_image, make_image_grid

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

repo_id = "stabilityai/stable-diffusion-2-inpainting"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, variant="fp16")

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_2/#inpainting
#inpainting
.md
171_3
```py
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image, make_image_grid
import torch

# load model and scheduler
model_id = "stabilityai/stable-diffusion-x4-upscaler"
pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

# let's download an image
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
low_res_img = load_image(url)
low_res_img = low_res_img.resize((128, 128))

prompt = "a white cat"
upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2)
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_2/#super-resolution
#super-resolution
.md
171_4
```py
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image, make_image_grid

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = load_image(url)

prompt = "two tigers"
negative_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_2/#depth-to-image
#depth-to-image
.md
171_5
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/
.md
172_0
Stable Diffusion 3 (SD3) was proposed in [Scaling Rectified Flow Transformers for High-Resolution Image Synthesis](https://arxiv.org/pdf/2403.03206.pdf) by Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Muller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach.

The abstract from the paper is:

*Diffusion models create data from noise by inverting the forward paths of data towards noise and have emerged as a powerful generative modeling technique for high-dimensional, perceptual data such as images and videos. Rectified flow is a recent generative model formulation that connects data and noise in a straight line. Despite its better theoretical properties and conceptual simplicity, it is not yet decisively established as standard practice. In this work, we improve existing noise sampling techniques for training rectified flow models by biasing them towards perceptually relevant scales. Through a large-scale study, we demonstrate the superior performance of this approach compared to established diffusion formulations for high-resolution text-to-image synthesis. Additionally, we present a novel transformer-based architecture for text-to-image generation that uses separate weights for the two modalities and enables a bidirectional flow of information between image and text tokens, improving text comprehension, typography, and human preference ratings. We demonstrate that this architecture follows predictable scaling trends and correlates lower validation loss to improved text-to-image synthesis as measured by various metrics and human evaluations.*
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#stable-diffusion-3
#stable-diffusion-3
.md
172_1
_As the model is gated, before using it with diffusers you first need to go to the [Stable Diffusion 3 Medium Hugging Face page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), fill in the form and accept the gate. Once you are in, you need to log in so that your system knows you’ve accepted the gate._

Use the command below to log in:

```bash
huggingface-cli login
```

<Tip>

The SD3 pipeline uses three text encoders to generate an image. Model offloading is necessary in order for it to run on most commodity hardware. Please use the `torch.float16` data type for additional memory savings.

</Tip>

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```

**Note:** Stable Diffusion 3.5 can also be run using the SD3 pipeline, and all mentioned optimizations and techniques apply to it as well. In total there are three official models in the SD3 family:

- [`stabilityai/stable-diffusion-3-medium-diffusers`](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers)
- [`stabilityai/stable-diffusion-3.5-large`](https://huggingface.co/stabilityai/stable-diffusion-3-5-large)
- [`stabilityai/stable-diffusion-3.5-large-turbo`](https://huggingface.co/stabilityai/stable-diffusion-3-5-large-turbo)
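The distilled turbo checkpoint listed above loads through the same pipeline. The snippet below is an illustrative sketch rather than part of the original guide; the few-step settings (`num_inference_steps=4`, `guidance_scale=0.0`) are assumptions based on how distilled turbo models are typically sampled, so check the model card for the recommended values.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Sketch only: assumed few-step sampling settings for the turbo checkpoint.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-turbo", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    num_inference_steps=4,   # assumption: turbo models are distilled for very few steps
    guidance_scale=0.0,      # assumption: guidance is usually disabled for turbo models
).images[0]
image.save("sd35_turbo_hello_world.png")
```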
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#usage-example
#usage-example
.md
172_2
An IP-Adapter lets you prompt SD3 with images, in addition to the text prompt. This is especially useful when describing complex concepts that are difficult to articulate through text alone and you have reference images.

To load and use an IP-Adapter, you need:

- `image_encoder`: Pre-trained vision model used to obtain image features, usually a CLIP image encoder.
- `feature_extractor`: Image processor that prepares the input image for the chosen `image_encoder`.
- `ip_adapter_id`: Checkpoint containing parameters of image cross attention layers and image projection.

IP-Adapters are trained for a specific model architecture, so they also work in finetuned variations of the base model. You can use the [`~SD3IPAdapterMixin.set_ip_adapter_scale`] function to adjust how strongly the output aligns with the image prompt. The higher the value, the more closely the model follows the image prompt. A default value of 0.5 is typically a good balance, ensuring the model considers both the text and image prompts equally.

```python
import torch
from PIL import Image

from diffusers import StableDiffusion3Pipeline
from transformers import SiglipVisionModel, SiglipImageProcessor

image_encoder_id = "google/siglip-so400m-patch14-384"
ip_adapter_id = "guiyrt/InstantX-SD3.5-Large-IP-Adapter-diffusers"

feature_extractor = SiglipImageProcessor.from_pretrained(
    image_encoder_id,
    torch_dtype=torch.float16
)
image_encoder = SiglipVisionModel.from_pretrained(
    image_encoder_id,
    torch_dtype=torch.float16
).to("cuda")

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.float16,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
).to("cuda")

pipe.load_ip_adapter(ip_adapter_id)
pipe.set_ip_adapter_scale(0.6)

ref_img = Image.open("image.jpg").convert('RGB')

image = pipe(
    width=1024,
    height=1024,
    prompt="a cat",
    negative_prompt="lowres, low quality, worst quality",
    num_inference_steps=24,
    guidance_scale=5.0,
    ip_adapter_image=ref_img
).images[0]

image.save("result.jpg")
```

<div class="justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sd3_ip_adapter_example.png"/>
    <figcaption class="mt-2 text-sm text-center text-gray-500">IP-Adapter examples with prompt "a cat"</figcaption>
</div>

<Tip>

Check out [IP-Adapter](../../../using-diffusers/ip_adapter) to learn more about how IP-Adapters work.

</Tip>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#image-prompting-with-ip-adapters
#image-prompting-with-ip-adapters
.md
172_3
SD3 uses three text encoders, one of which is the very large T5-XXL model. This makes it challenging to run the model on GPUs with less than 24GB of VRAM, even when using `fp16` precision. The following sections outline a few memory optimizations in Diffusers that make it easier to run SD3 on low-resource hardware.
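If you want to verify the savings on your own hardware, a hypothetical helper such as the one below (not part of the original docs) can wrap any of the pipelines in the following sections and report peak GPU memory:

```python
import torch

def report_peak_memory(pipe, **pipe_kwargs):
    """Run a pipeline call and print the peak GPU memory it used.

    Hypothetical utility for comparing the memory optimizations below;
    it relies only on standard torch.cuda statistics.
    """
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    image = pipe(**pipe_kwargs).images[0]
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"peak GPU memory: {peak_gb:.2f} GB")
    return image
```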
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#memory-optimisations-for-sd3
#memory-optimisations-for-sd3
.md
172_4
The most basic memory optimization available in Diffusers allows you to offload the components of the model to CPU during inference in order to save memory, while seeing a slight increase in inference latency. Model offloading will only move a model component onto the GPU when it needs to be executed, while keeping the remaining components on the CPU.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#running-inference-with-model-offloading
#running-inference-with-model-offloading
.md
172_5
Removing the memory-intensive 4.7B parameter T5-XXL text encoder during inference can significantly decrease the memory requirements for SD3 with only a slight loss in performance.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-no-T5.png")
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#dropping-the-t5-text-encoder-during-inference
#dropping-the-t5-text-encoder-during-inference
.md
172_6
We can leverage the `bitsandbytes` library to load and quantize the T5-XXL text encoder to 8-bit precision. This allows you to keep using all three text encoders while only slightly impacting performance.

First install the `bitsandbytes` library.

```shell
pip install bitsandbytes
```

Then load the T5-XXL model using the `BitsAndBytesConfig`.

```python
import torch
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"
text_encoder = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=quantization_config,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    text_encoder_3=text_encoder,
    device_map="balanced",
    torch_dtype=torch.float16
)

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-8bit-T5.png")
```

You can find the end-to-end script [here](https://gist.github.com/sayakpaul/82acb5976509851f2db1a83456e504f1).
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#using-a-quantized-version-of-the-t5-text-encoder
#using-a-quantized-version-of-the-t5-text-encoder
.md
172_7
Using compiled components in the SD3 pipeline can speed up inference by as much as 4X. The following code snippet demonstrates how to compile the Transformer and VAE components of the SD3 pipeline.

```python
import torch
from diffusers import StableDiffusion3Pipeline

torch.set_float32_matmul_precision("high")
torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16
).to("cuda")
pipe.set_progress_bar_config(disable=True)

pipe.transformer.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)

pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)

# Warm Up
prompt = "a photo of a cat holding a sign that says hello world"
for _ in range(3):
    _ = pipe(prompt=prompt, generator=torch.manual_seed(1))

# Run Inference
image = pipe(prompt=prompt, generator=torch.manual_seed(1)).images[0]
image.save("sd3_hello_world.png")
```

Check out the full script [here](https://gist.github.com/sayakpaul/508d89d7aad4f454900813da5d42ca97).
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#using-torch-compile-to-speed-up-inference
#using-torch-compile-to-speed-up-inference
.md
172_8
Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on image quality depending on the model. Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [`StableDiffusion3Pipeline`] for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SD3Transformer2DModel, StableDiffusion3Pipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    subfolder="text_encoder_3",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    text_encoder_3=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("sd3.png")
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#quantization
#quantization
.md
172_9
By default, the T5 Text Encoder prompt uses a maximum sequence length of `256`. This can be adjusted by setting the `max_sequence_length` to accept fewer or more tokens. Keep in mind that longer sequences require additional resources and result in longer generation times, such as during batch inference.

```python
prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature’s body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree. As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight"

image = pipe(
    prompt=prompt,
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=4.5,
    max_sequence_length=512,
).images[0]
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#using-long-prompts-with-the-t5-text-encoder
#using-long-prompts-with-the-t5-text-encoder
.md
172_10
You can send a different prompt to the CLIP Text Encoders and the T5 Text Encoder to prevent the prompt from being truncated by the CLIP Text Encoders and to improve generation.

<Tip>

The prompt with the CLIP Text Encoders is still truncated to the 77 token limit.

</Tip>

```python
prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. A river of warm, melted butter, pancake-like foliage in the background, a towering pepper mill standing in for a tree."

prompt_3 = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature’s body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree. As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight"

image = pipe(
    prompt=prompt,
    prompt_3=prompt_3,
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=4.5,
    max_sequence_length=512,
).images[0]
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#sending-a-different-prompt-to-the-t5-text-encoder
#sending-a-different-prompt-to-the-t5-text-encoder
.md
172_11
Tiny AutoEncoder for Stable Diffusion (TAESD3) is a tiny distilled version of Stable Diffusion 3's VAE by [Ollin Boer Bohan](https://github.com/madebyollin/taesd) that can decode [`StableDiffusion3Pipeline`] latents almost instantly.

To use with Stable Diffusion 3:

```python
import torch
from diffusers import StableDiffusion3Pipeline, AutoencoderTiny

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd3", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("cheesecake.png")
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#tiny-autoencoder-for-stable-diffusion-3
#tiny-autoencoder-for-stable-diffusion-3
.md
172_12
The `SD3Transformer2DModel` and `StableDiffusion3Pipeline` classes support loading the original checkpoints via the `from_single_file` method. This method lets you load the original single-file checkpoints distributed for these models.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#loading-the-original-checkpoints-via-fromsinglefile
#loading-the-original-checkpoints-via-fromsinglefile
.md
172_13
```python
from diffusers import SD3Transformer2DModel

model = SD3Transformer2DModel.from_single_file("https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium.safetensors")
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#loading-the-original-checkpoints-for-the-sd3transformer2dmodel
#loading-the-original-checkpoints-for-the-sd3transformer2dmodel
.md
172_14
```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors",
    torch_dtype=torch.float16,
    text_encoder_3=None
)
pipe.enable_model_cpu_offload()

image = pipe("a picture of a cat holding a sign that says hello world").images[0]
image.save('sd3-single-file.png')
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#loading-the-single-file-checkpoint-without-t5
#loading-the-single-file-checkpoint-without-t5
.md
172_15
> [!TIP]
> The following example loads a checkpoint stored in an 8-bit floating point format, which requires PyTorch 2.3 or later.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips_t5xxlfp8.safetensors",
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

image = pipe("a picture of a cat holding a sign that says hello world").images[0]
image.save('sd3-single-file-t5-fp8.png')
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#loading-the-single-file-checkpoint-with-t5
#loading-the-single-file-checkpoint-with-t5
.md
172_16
```python
import torch
from diffusers import SD3Transformer2DModel, StableDiffusion3Pipeline

transformer = SD3Transformer2DModel.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/sd3.5_large.safetensors",
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe("a cat holding a sign that says hello world").images[0]
image.save("sd35.png")
```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#loading-the-single-file-checkpoint-for-the-stable-diffusion-35-transformer-model
#loading-the-single-file-checkpoint-for-the-stable-diffusion-35-transformer-model
.md
172_17
StableDiffusion3Pipeline

Args:
    transformer ([`SD3Transformer2DModel`]):
        Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
    scheduler ([`FlowMatchEulerDiscreteScheduler`]):
        A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
    vae ([`AutoencoderKL`]):
        Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
    text_encoder ([`CLIPTextModelWithProjection`]):
        [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant, with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size` as its dimension.
    text_encoder_2 ([`CLIPTextModelWithProjection`]):
        [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant.
    text_encoder_3 ([`T5EncoderModel`]):
        Frozen text-encoder. Stable Diffusion 3 uses [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
    tokenizer (`CLIPTokenizer`):
        Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
    tokenizer_2 (`CLIPTokenizer`):
        Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
    tokenizer_3 (`T5TokenizerFast`):
        Tokenizer of class [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
    image_encoder (`PreTrainedModel`, *optional*):
        Pre-trained Vision Model for IP Adapter.
    feature_extractor (`BaseImageProcessor`, *optional*):
        Image processor for IP Adapter.

- all
- __call__
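As a quick orientation (this sketch is not part of the original reference), the documented arguments map one-to-one onto the loaded pipeline's components, which you can inspect or swap after loading:

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Sketch: inspect the components documented above on a loaded pipeline.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)

print(type(pipe.transformer).__name__)  # SD3Transformer2DModel
print(type(pipe.scheduler).__name__)    # FlowMatchEulerDiscreteScheduler
print(list(pipe.components))            # names of all documented components
```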
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3/#stablediffusion3pipeline
#stablediffusion3pipeline
.md
172_18
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/inpaint.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/inpaint/
.md
173_0
The Stable Diffusion model can also be applied to inpainting, which lets you edit specific parts of an image by providing a mask and a text prompt.
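A minimal sketch of what this looks like in practice is shown below; it is not taken from this page and reuses the Stable Diffusion 2 inpainting checkpoint and example images from earlier purely for illustration.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

# Illustrative sketch: fill the masked region of an image according to a text prompt.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))  # white pixels mark the region to repaint

image = pipe(
    prompt="Face of a yellow cat, high resolution, sitting on a park bench",
    image=init_image,
    mask_image=mask_image,
).images[0]
```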
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/pipelines/stable_diffusion/inpaint.md
https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/inpaint/#inpainting
#inpainting
.md
173_1