Dataset schema: source (string, 273 distinct values) · url (string, 47–172 characters) · file_type (string, 1 distinct value) · chunk (string, 1–512 characters) · chunk_id (string, 5–9 characters)
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet.md
https://huggingface.co/docs/diffusers/en/api/models/unet/#unet1dmodel
.md
in_channels (`int`, *optional*, defaults to 2): Number of channels in the input sample. out_channels (`int`, *optional*, defaults to 2): Number of channels in the output. extra_in_channels (`int`, *optional*, defaults to 0): Number of additional channels to be added to the input of the first down block. Useful for cases where the input data has more channels than what the model was initially designed for. time_embedding_type (`str`, *optional*, defaults to `"fourier"`): Type of time embedding to use.
225_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet.md
https://huggingface.co/docs/diffusers/en/api/models/unet/#unet1dmodel
.md
time_embedding_type (`str`, *optional*, defaults to `"fourier"`): Type of time embedding to use. freq_shift (`float`, *optional*, defaults to 0.0): Frequency shift for Fourier time embedding. flip_sin_to_cos (`bool`, *optional*, defaults to `False`): Whether to flip sin to cos for Fourier time embedding. down_block_types (`Tuple[str]`, *optional*, defaults to `("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")`): Tuple of downsample block types.
225_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet.md
https://huggingface.co/docs/diffusers/en/api/models/unet/#unet1dmodel
.md
Tuple of downsample block types. up_block_types (`Tuple[str]`, *optional*, defaults to `("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")`): Tuple of upsample block types. block_out_channels (`Tuple[int]`, *optional*, defaults to `(32, 32, 64)`): Tuple of block output channels. mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock1D"`): Block type for middle of UNet. out_block_type (`str`, *optional*, defaults to `None`): Optional output processing block of UNet.
225_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet.md
https://huggingface.co/docs/diffusers/en/api/models/unet/#unet1dmodel
.md
out_block_type (`str`, *optional*, defaults to `None`): Optional output processing block of UNet. act_fn (`str`, *optional*, defaults to `None`): Optional activation function in UNet blocks. norm_num_groups (`int`, *optional*, defaults to 8): The number of groups for normalization. layers_per_block (`int`, *optional*, defaults to 1): The number of layers per block. downsample_each_block (`bool`, *optional*, defaults to `False`): Experimental feature for using a UNet without upsampling.
225_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet.md
https://huggingface.co/docs/diffusers/en/api/models/unet/#unet1doutput
.md
UNet1DOutput The output of [`UNet1DModel`]. Args: sample (`torch.Tensor` of shape `(batch_size, num_channels, sample_size)`): The hidden states output from the last layer of the model.
225_3_0
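To make the parameter listing above concrete, here is a minimal sketch (not taken from the docs) that instantiates [`UNet1DModel`] with the documented default channel counts and runs one denoising step; the sample length and timestep are illustrative assumptions, and nothing here implies the all-defaults configuration matches any pretrained checkpoint.

```py
import torch
from diffusers import UNet1DModel

# Documented defaults: 2 input/output channels, Fourier time embedding.
model = UNet1DModel(in_channels=2, out_channels=2)

sample = torch.randn(1, 2, 16384)  # (batch_size, num_channels, sample_size); length is an assumed value
timestep = torch.tensor([10])
output = model(sample, timestep)   # returns a UNet1DOutput
print(output.sample.shape)         # same shape as the input sample
```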
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
226_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
226_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not.
226_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model.
226_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
The abstract from the paper is:
226_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the
226_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a
226_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
226_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
UNet3DConditionModel A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample-shaped output. This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving). Parameters: sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): Height and width of input/output sample.
226_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
Parameters: sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): Height and width of input/output sample. in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")`): The tuple of downsample blocks to use.
226_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
The tuple of downsample blocks to use. up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")`): The tuple of upsample blocks to use. block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): The tuple of output channels for each block. layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block.
226_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization.
226_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. If `None`, normalization and activation layers are skipped in post-processing. norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. cross_attention_dim (`int`, *optional*, defaults to 1024): The dimension of the cross attention features. attention_head_dim (`int`, *optional*, defaults to 64): The dimension of the attention heads.
226_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionmodel
.md
attention_head_dim (`int`, *optional*, defaults to 64): The dimension of the attention heads. num_attention_heads (`int`, *optional*): The number of attention heads. time_cond_proj_dim (`int`, *optional*, defaults to `None`): The dimension of `cond_proj` layer in the timestep embedding.
226_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet3d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet3d-cond/#unet3dconditionoutput
.md
UNet3DConditionOutput The output of [`UNet3DConditionModel`]. Args: sample (`torch.Tensor` of shape `(batch_size, num_channels, num_frames, height, width)`): The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model.
226_3_0
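As a hedged illustration of the shapes documented above, the sketch below builds a [`UNet3DConditionModel`] with its documented defaults (including `cross_attention_dim=1024`) and runs one conditioned forward pass; the batch size, frame count, spatial size, and sequence length are illustrative assumptions, and the default-sized model is large.

```py
import torch
from diffusers import UNet3DConditionModel

model = UNet3DConditionModel()  # documented defaults: 4 in/out channels, cross_attention_dim=1024

sample = torch.randn(1, 4, 8, 32, 32)             # (batch_size, num_channels, num_frames, height, width)
encoder_hidden_states = torch.randn(1, 77, 1024)  # conditioning; last dim must equal cross_attention_dim
timestep = torch.tensor([10])
output = model(sample, timestep, encoder_hidden_states=encoder_hidden_states)
print(output.sample.shape)  # torch.Size([1, 4, 8, 32, 32]), wrapped in a UNet3DConditionOutput
```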
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
227_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
227_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not.
227_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model.
227_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
The abstract from the paper is:
227_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the
227_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a
227_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
227_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
UNet2DConditionModel A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample-shaped output. This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving). Parameters: sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): Height and width of input/output sample.
227_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
Parameters: sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): Height and width of input/output sample. in_channels (`int`, *optional*, defaults to 4): Number of channels in the input sample. out_channels (`int`, *optional*, defaults to 4): Number of channels in the output. center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. flip_sin_to_cos (`bool`, *optional*, defaults to `True`): Whether to flip the sin to cos in the time embedding.
227_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
flip_sin_to_cos (`bool`, *optional*, defaults to `True`): Whether to flip the sin to cos in the time embedding. freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`): The tuple of downsample blocks to use. mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`):
227_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
The tuple of downsample blocks to use. mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`): Block type for the middle of the UNet; it can be one of `UNetMidBlock2DCrossAttn`, `UNetMidBlock2D`, or `UNetMidBlock2DSimpleCrossAttn`. If `None`, the mid block layer is skipped. up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")`): The tuple of upsample blocks to use.
227_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
The tuple of upsample blocks to use. only_cross_attention (`bool` or `Tuple[bool]`, *optional*, defaults to `False`): Whether to include self-attention in the basic transformer blocks, see [`~models.attention.BasicTransformerBlock`]. block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): The tuple of output channels for each block. layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block.
227_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
227_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. If `None`, normalization and activation layers are skipped in post-processing. norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. cross_attention_dim (`int` or `Tuple[int]`, *optional*, defaults to 1280): The dimension of the cross attention features.
227_2_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
cross_attention_dim (`int` or `Tuple[int]`, *optional*, defaults to 1280): The dimension of the cross attention features. transformer_layers_per_block (`int`, `Tuple[int]`, or `Tuple[Tuple]` , *optional*, defaults to 1): The number of transformer blocks of type [`~models.attention.BasicTransformerBlock`]. Only relevant for [`~models.unets.unet_2d_blocks.CrossAttnDownBlock2D`], [`~models.unets.unet_2d_blocks.CrossAttnUpBlock2D`], [`~models.unets.unet_2d_blocks.UNetMidBlock2DCrossAttn`].
227_2_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
[`~models.unets.unet_2d_blocks.UNetMidBlock2DCrossAttn`]. reverse_transformer_layers_per_block (`Tuple[Tuple]`, *optional*, defaults to `None`): The number of transformer blocks of type [`~models.attention.BasicTransformerBlock`], in the upsampling blocks of the U-Net. Only relevant if `transformer_layers_per_block` is of type `Tuple[Tuple]` and for [`~models.unets.unet_2d_blocks.CrossAttnDownBlock2D`], [`~models.unets.unet_2d_blocks.CrossAttnUpBlock2D`],
227_2_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
[`~models.unets.unet_2d_blocks.CrossAttnDownBlock2D`], [`~models.unets.unet_2d_blocks.CrossAttnUpBlock2D`], [`~models.unets.unet_2d_blocks.UNetMidBlock2DCrossAttn`]. encoder_hid_dim (`int`, *optional*, defaults to `None`): If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim` dimension to `cross_attention_dim`. encoder_hid_dim_type (`str`, *optional*, defaults to `None`):
227_2_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
dimension to `cross_attention_dim`. encoder_hid_dim_type (`str`, *optional*, defaults to `None`): If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text embeddings of dimension `cross_attention_dim` according to `encoder_hid_dim_type`. attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. num_attention_heads (`int`, *optional*): The number of attention heads. If not defined, defaults to `attention_head_dim`
227_2_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
num_attention_heads (`int`, *optional*): The number of attention heads. If not defined, defaults to `attention_head_dim` resnet_time_scale_shift (`str`, *optional*, defaults to `"default"`): Time scale shift config for ResNet blocks (see [`~models.resnet.ResnetBlock2D`]). Choose from `default` or `scale_shift`. class_embed_type (`str`, *optional*, defaults to `None`): The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`,
227_2_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`, `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`. addition_embed_type (`str`, *optional*, defaults to `None`): Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or `"text"`. `"text"` will use the `TextTimeEmbedding` layer. addition_time_embed_dim (`int`, *optional*, defaults to `None`): Dimension for the timestep embeddings.
227_2_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
addition_time_embed_dim (`int`, *optional*, defaults to `None`): Dimension for the timestep embeddings. num_class_embeds (`int`, *optional*, defaults to `None`): Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing class conditioning with `class_embed_type` equal to `None`. time_embedding_type (`str`, *optional*, defaults to `positional`): The type of position embedding to use for timesteps. Choose from `positional` or `fourier`.
227_2_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
The type of position embedding to use for timesteps. Choose from `positional` or `fourier`. time_embedding_dim (`int`, *optional*, defaults to `None`): An optional override for the dimension of the projected time embedding. time_embedding_act_fn (`str`, *optional*, defaults to `None`): Optional activation function to use only once on the time embeddings before they are passed to the rest of the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`. timestep_post_act (`str`, *optional*, defaults to `None`):
227_2_14
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`. timestep_post_act (`str`, *optional*, defaults to `None`): The second activation function to use in timestep embedding. Choose from `silu`, `mish` and `gelu`. time_cond_proj_dim (`int`, *optional*, defaults to `None`): The dimension of `cond_proj` layer in the timestep embedding. conv_in_kernel (`int`, *optional*, defaults to `3`): The kernel size of the `conv_in` layer.
227_2_15
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
conv_in_kernel (`int`, *optional*, defaults to `3`): The kernel size of the `conv_in` layer. conv_out_kernel (`int`, *optional*, defaults to `3`): The kernel size of the `conv_out` layer. projection_class_embeddings_input_dim (`int`, *optional*): The dimension of the `class_labels` input when `class_embed_type="projection"`. Required when `class_embed_type="projection"`. class_embeddings_concat (`bool`, *optional*, defaults to `False`): Whether to concatenate the time embeddings with the class embeddings.
227_2_16
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionmodel
.md
embeddings with the class embeddings. mid_block_only_cross_attention (`bool`, *optional*, defaults to `None`): Whether to use cross attention with the mid block when using the `UNetMidBlock2DSimpleCrossAttn`. If `only_cross_attention` is given as a single boolean and `mid_block_only_cross_attention` is `None`, the `only_cross_attention` value is used as the value for `mid_block_only_cross_attention`. Defaults to `False` otherwise.
227_2_17
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#unet2dconditionoutput
.md
UNet2DConditionOutput The output of [`UNet2DConditionModel`]. Args: sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`): The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model.
227_3_0
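To make the parameter list above concrete, here is a minimal, hedged sketch that runs a default [`UNet2DConditionModel`] (which follows the documented defaults, including `cross_attention_dim=1280`); the latent size and sequence length are illustrative assumptions.

```py
import torch
from diffusers import UNet2DConditionModel

model = UNet2DConditionModel()  # documented defaults: 4 in/out channels, cross_attention_dim=1280

sample = torch.randn(1, 4, 64, 64)                # (batch_size, num_channels, height, width)
encoder_hidden_states = torch.randn(1, 77, 1280)  # last dim must equal cross_attention_dim
output = model(sample, timestep=10, encoder_hidden_states=encoder_hidden_states)
print(output.sample.shape)  # torch.Size([1, 4, 64, 64]), wrapped in a UNet2DConditionOutput
```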
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#flaxunet2dconditionmodel
.md
[[autodoc]] FlaxUNet2DConditionModel
227_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/models/unet2d-cond.md
https://huggingface.co/docs/diffusers/en/api/models/unet2d-cond/#flaxunet2dconditionoutput
.md
[[autodoc]] FlaxUNet2DConditionOutput
227_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/ip_adapter.md
https://huggingface.co/docs/diffusers/en/api/loaders/ip_adapter/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
228_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/ip_adapter.md
https://huggingface.co/docs/diffusers/en/api/loaders/ip_adapter/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
228_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/ip_adapter.md
https://huggingface.co/docs/diffusers/en/api/loaders/ip_adapter/#ip-adapter
.md
[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. <Tip> Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading](../../using-diffusers/loading_adapters#ip-adapter) guide, and you can see how to use it in the [usage](../../using-diffusers/ip_adapter) guide. </Tip>
228_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/ip_adapter.md
https://huggingface.co/docs/diffusers/en/api/loaders/ip_adapter/#ipadaptermixin
.md
IPAdapterMixin Mixin for handling IP Adapters.
228_2_0
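As a hedged usage sketch, [`~loaders.IPAdapterMixin.load_ip_adapter`] is typically called on a pipeline; the repository, subfolder, and weight file names below are the commonly used `h94/IP-Adapter` checkpoint names from the loading guide, not values defined in this section.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the image-prompt adapter weights into the UNet's attention layers.
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.set_ip_adapter_scale(0.6)  # balance between the image prompt and the text prompt
```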
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/ip_adapter.md
https://huggingface.co/docs/diffusers/en/api/loaders/ip_adapter/#sd3ipadaptermixin
.md
SD3IPAdapterMixin Mixin for handling Stable Diffusion 3 IP Adapters. - all - is_ip_adapter_active
228_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/ip_adapter.md
https://huggingface.co/docs/diffusers/en/api/loaders/ip_adapter/#ipadaptermaskprocessor
.md
IPAdapterMaskProcessor Image processor for IP Adapter image masks. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. vae_scale_factor (`int`, *optional*, defaults to `8`): VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor. resample (`str`, *optional*, defaults to `lanczos`): Resampling filter to use when resizing the image.
228_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/ip_adapter.md
https://huggingface.co/docs/diffusers/en/api/loaders/ip_adapter/#ipadaptermaskprocessor
.md
resample (`str`, *optional*, defaults to `lanczos`): Resampling filter to use when resizing the image. do_normalize (`bool`, *optional*, defaults to `False`): Whether to normalize the image to [-1,1]. do_binarize (`bool`, *optional*, defaults to `True`): Whether to binarize the image to 0/1. do_convert_grayscale (`bool`, *optional*, defaults to `True`): Whether to convert the images to grayscale format.
228_4_1
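A short, hedged sketch of how this processor is typically used: preprocess one binary mask per IP-Adapter image so the masks can later be passed to a pipeline through `cross_attention_kwargs`; the file names and output size are placeholders.

```py
from PIL import Image
from diffusers.image_processor import IPAdapterMaskProcessor

mask1 = Image.open("mask1.png")  # hypothetical binary mask images
mask2 = Image.open("mask2.png")

processor = IPAdapterMaskProcessor()  # do_binarize and do_convert_grayscale default to True
masks = processor.preprocess([mask1, mask2], height=1024, width=1024)
# later: pipeline(..., cross_attention_kwargs={"ip_adapter_masks": masks})
```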
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
229_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
229_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#lora
.md
LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the denoiser, the text encoder, or both. The denoiser usually corresponds to a UNet ([`UNet2DConditionModel`], for example) or a Transformer ([`SD3Transformer2DModel`], for example). There are several classes for
229_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#lora
.md
for example) or a Transformer ([`SD3Transformer2DModel`], for example). There are several classes for loading LoRA weights:
229_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#lora
.md
- [`StableDiffusionLoraLoaderMixin`] provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and otherwise managing LoRA weights. This class can be used with any model. - [`StableDiffusionXLLoraLoaderMixin`] is a [Stable Diffusion (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl) version of the [`StableDiffusionLoraLoaderMixin`] class for loading and saving LoRA weights. It can only be used with the SDXL model.
229_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#lora
.md
- [`SD3LoraLoaderMixin`] provides similar functions for [Stable Diffusion 3](https://huggingface.co/blog/sd3). - [`FluxLoraLoaderMixin`] provides similar functions for [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux). - [`CogVideoXLoraLoaderMixin`] provides similar functions for [CogVideoX](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox).
229_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#lora
.md
- [`Mochi1LoraLoaderMixin`] provides similar functions for [Mochi](https://huggingface.co/docs/diffusers/main/en/api/pipelines/mochi). - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`]. - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, and unload LoRAs, and more. <Tip> To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide. </Tip>
229_1_4
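In practice, these mixins are all reached through the pipeline's `load_lora_weights` method. Below is a hedged sketch using SDXL; the LoRA repository and weight file name are illustrative examples, not part of this API reference.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# StableDiffusionXLLoraLoaderMixin handles this call for SDXL pipelines.
pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors")
pipeline.fuse_lora()    # optionally merge the LoRA into the base weights
pipeline.unfuse_lora()  # and restore the original weights afterwards
```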
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#stablediffusionloraloadermixin
.md
StableDiffusionLoraLoaderMixin Load LoRA layers into Stable Diffusion [`UNet2DConditionModel`] and [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel).
229_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#stablediffusionxlloraloadermixin
.md
StableDiffusionXLLoraLoaderMixin Load LoRA layers into Stable Diffusion XL [`UNet2DConditionModel`], [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and [`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection).
229_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#sd3loraloadermixin
.md
SD3LoraLoaderMixin Load LoRA layers into [`SD3Transformer2DModel`], [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and [`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection). Specific to [`StableDiffusion3Pipeline`].
229_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#fluxloraloadermixin
.md
FluxLoraLoaderMixin Load LoRA layers into [`FluxTransformer2DModel`] and [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel). Specific to [`FluxPipeline`].
229_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#cogvideoxloraloadermixin
.md
CogVideoXLoraLoaderMixin Load LoRA layers into [`CogVideoXTransformer3DModel`]. Specific to [`CogVideoXPipeline`].
229_6_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#mochi1loraloadermixin
.md
Mochi1LoraLoaderMixin Load LoRA layers into [`MochiTransformer3DModel`]. Specific to [`MochiPipeline`].
229_7_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#amusedloraloadermixin
.md
AmusedLoraLoaderMixin
229_8_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/lora.md
https://huggingface.co/docs/diffusers/en/api/loaders/lora/#lorabasemixin
.md
LoraBaseMixin Utility class for handling LoRAs.
229_9_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
230_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
230_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/#single-files
.md
The [`~loaders.FromSingleFileMixin.from_single_file`] method allows you to load: * a model stored in a single file, which is useful if you're working with models from the diffusion ecosystem, like Automatic1111, that commonly rely on a single-file layout to store and share models * a model stored in its originally distributed layout, which is useful if you're working with models finetuned with other services, and want to load it directly into Diffusers model objects and pipelines > [!TIP]
230_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/#single-files
.md
> [!TIP] > Read the [Model files and layouts](../../using-diffusers/other-formats) guide to learn more about the Diffusers-multifolder layout versus the single-file layout, and how to load models stored in these different layouts.
230_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/#supported-pipelines
.md
- [`StableDiffusionPipeline`] - [`StableDiffusionImg2ImgPipeline`] - [`StableDiffusionInpaintPipeline`] - [`StableDiffusionControlNetPipeline`] - [`StableDiffusionControlNetImg2ImgPipeline`] - [`StableDiffusionControlNetInpaintPipeline`] - [`StableDiffusionUpscalePipeline`] - [`StableDiffusionXLPipeline`] - [`StableDiffusionXLImg2ImgPipeline`] - [`StableDiffusionXLInpaintPipeline`] - [`StableDiffusionXLInstructPix2PixPipeline`] - [`StableDiffusionXLControlNetPipeline`]
230_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/#supported-pipelines
.md
- [`StableDiffusionXLInpaintPipeline`] - [`StableDiffusionXLInstructPix2PixPipeline`] - [`StableDiffusionXLControlNetPipeline`] - [`StableDiffusionXLKDiffusionPipeline`] - [`StableDiffusion3Pipeline`] - [`LatentConsistencyModelPipeline`] - [`LatentConsistencyModelImg2ImgPipeline`] - [`StableDiffusionControlNetXSPipeline`] - [`StableDiffusionXLControlNetXSPipeline`] - [`LEditsPPPipelineStableDiffusion`] - [`LEditsPPPipelineStableDiffusionXL`] - [`PIAPipeline`]
230_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/#supported-models
.md
- [`UNet2DConditionModel`] - [`StableCascadeUNet`] - [`AutoencoderKL`] - [`ControlNetModel`] - [`SD3Transformer2DModel`] - [`FluxTransformer2DModel`]
230_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/#fromsinglefilemixin
.md
FromSingleFileMixin Load model weights saved in the `.ckpt` format into a [`DiffusionPipeline`].
230_4_0
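A hedged sketch of the pipeline-level entry point; the checkpoint URL points at the official SDXL single-file release and is an example, not a requirement.

```py
import torch
from diffusers import StableDiffusionXLPipeline

# Load an entire pipeline from one checkpoint file instead of the multifolder layout.
pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
    torch_dtype=torch.float16,
)
```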
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/single_file.md
https://huggingface.co/docs/diffusers/en/api/loaders/single_file/#fromoriginalmodelmixin
.md
FromOriginalModelMixin Load pretrained weights saved in the `.ckpt` or `.safetensors` format into a model.
230_5_0
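And the model-level counterpart, sketched under the assumption of a single-file Flux transformer checkpoint; the repository below is an illustrative community mirror, not an official source.

```py
import torch
from diffusers import FluxTransformer2DModel

# Load just one model component from a single checkpoint file.
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors",
    torch_dtype=torch.bfloat16,
)
```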
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/transformer_sd3.md
https://huggingface.co/docs/diffusers/en/api/loaders/transformer_sd3/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
231_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/transformer_sd3.md
https://huggingface.co/docs/diffusers/en/api/loaders/transformer_sd3/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
231_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/transformer_sd3.md
https://huggingface.co/docs/diffusers/en/api/loaders/transformer_sd3/#sd3transformer2d
.md
This class is useful when *only* loading weights into a [`SD3Transformer2DModel`]. If you need to load weights into the text encoder, or into both a text encoder and the SD3Transformer2DModel, check the [`SD3LoraLoaderMixin`](lora#diffusers.loaders.SD3LoraLoaderMixin) class instead. The [`SD3Transformer2DLoadersMixin`] class currently only loads IP-Adapter weights, but will be used in the future to save weights and load LoRAs. <Tip>
231_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/transformer_sd3.md
https://huggingface.co/docs/diffusers/en/api/loaders/transformer_sd3/#sd3transformer2d
.md
<Tip> To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide. </Tip>
231_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/transformer_sd3.md
https://huggingface.co/docs/diffusers/en/api/loaders/transformer_sd3/#sd3transformer2dloadersmixin
.md
SD3Transformer2DLoadersMixin Load IP-Adapters and LoRA layers into a [`SD3Transformer2DModel`]. - all - _load_ip_adapter_weights
231_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/textual_inversion.md
https://huggingface.co/docs/diffusers/en/api/loaders/textual_inversion/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
232_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/textual_inversion.md
https://huggingface.co/docs/diffusers/en/api/loaders/textual_inversion/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
232_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/textual_inversion.md
https://huggingface.co/docs/diffusers/en/api/loaders/textual_inversion/#textual-inversion
.md
Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. [`TextualInversionLoaderMixin`] provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. <Tip>
232_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/textual_inversion.md
https://huggingface.co/docs/diffusers/en/api/loaders/textual_inversion/#textual-inversion
.md
<Tip> To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/loading_adapters#textual-inversion) loading guide. </Tip>
232_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/textual_inversion.md
https://huggingface.co/docs/diffusers/en/api/loaders/textual_inversion/#textualinversionloadermixin
.md
TextualInversionLoaderMixin Load Textual Inversion tokens and embeddings into the tokenizer and text encoder.
232_2_0
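A hedged sketch of the workflow this mixin enables; the embedding repository and its `<cat-toy>` trigger token come from the loading guide's example and are illustrative.

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Loads the learned embedding and registers its special token with the tokenizer.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("A <cat-toy> sitting on a park bench").images[0]
```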
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/unet.md
https://huggingface.co/docs/diffusers/en/api/loaders/unet/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
233_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/unet.md
https://huggingface.co/docs/diffusers/en/api/loaders/unet/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
233_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/unet.md
https://huggingface.co/docs/diffusers/en/api/loaders/unet/#unet
.md
Some training methods - like LoRA and Custom Diffusion - typically target the UNet's attention layers, but they can also target other, non-attention layers. Instead of training all of a model's parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you're *only* loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the
233_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/unet.md
https://huggingface.co/docs/diffusers/en/api/loaders/unet/#unet
.md
*only* loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] function instead.
233_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/unet.md
https://huggingface.co/docs/diffusers/en/api/loaders/unet/#unet
.md
The [`UNet2DConditionLoadersMixin`] class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. <Tip> To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide. </Tip>
233_1_2
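A hedged sketch of that UNet-level surface: load a LoRA, then fuse/unfuse and enable/disable it directly on `pipe.unet`; the checkpoint path and adapter name are placeholders.

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/lora", adapter_name="style")  # hypothetical local checkpoint

pipe.unet.fuse_lora()               # merge LoRA weights into the UNet
pipe.unet.unfuse_lora()             # undo the merge
pipe.unet.disable_lora()            # keep the adapter loaded but inactive
pipe.unet.enable_lora()
pipe.unet.delete_adapters("style")  # remove the adapter entirely
```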
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/unet.md
https://huggingface.co/docs/diffusers/en/api/loaders/unet/#unet2dconditionloadersmixin
.md
UNet2DConditionLoadersMixin Load LoRA layers into a [`UNet2DConditionModel`].
233_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/peft.md
https://huggingface.co/docs/diffusers/en/api/loaders/peft/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
234_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/peft.md
https://huggingface.co/docs/diffusers/en/api/loaders/peft/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
234_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/peft.md
https://huggingface.co/docs/diffusers/en/api/loaders/peft/#peft
.md
Diffusers supports loading adapters such as [LoRA](../../using-diffusers/loading_adapters) from the [PEFT](https://huggingface.co/docs/peft/index) library via the [`~loaders.peft.PeftAdapterMixin`] class. This allows modeling classes in Diffusers, like [`UNet2DConditionModel`] and [`SD3Transformer2DModel`], to operate with an adapter. <Tip> Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference.md) tutorial for an overview of how to use PEFT in Diffusers for inference. </Tip>
234_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/loaders/peft.md
https://huggingface.co/docs/diffusers/en/api/loaders/peft/#peftadaptermixin
.md
PeftAdapterMixin A class containing all functions for loading and using adapter weights supported by the PEFT library. For more details about adapters and injecting them into a base model, check out the PEFT [documentation](https://huggingface.co/docs/peft/index). Install the latest version of PEFT, and use this mixin to: - Attach new adapters to the model. - Attach multiple adapters and iteratively activate/deactivate them. - Activate/deactivate all adapters from the model.
234_2_0
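A hedged sketch of the PeftAdapterMixin surface on a Diffusers model: attach a fresh LoRA adapter from a PEFT config, toggle it, and switch it on as the active adapter. The rank, alpha, and target modules are illustrative choices, not recommended values.

```py
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel()

# Attach a new LoRA adapter; r/lora_alpha/target_modules are illustrative.
config = LoraConfig(r=4, lora_alpha=4, target_modules=["to_q", "to_v"])
unet.add_adapter(config, adapter_name="my_lora")

unet.set_adapter("my_lora")  # make it the active adapter
unet.disable_adapters()      # temporarily bypass all adapters
unet.enable_adapters()       # re-enable them
```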