Columns: source (string, 273 values) · url (string, 47–172 chars) · file_type (string, 1 value) · chunk (string, 1–512 chars) · chunk_id (string, 5–9 chars)
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/diffusers/en/quantization/bitsandbytes/#nested-quantization
.md
Nested quantization is a technique that can save additional memory at no additional performance cost. This feature performs a second quantization of the already quantized weights to save an additional 0.4 bits/parameter. ```py import torch from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel
43_8_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/diffusers/en/quantization/bitsandbytes/#nested-quantization
.md
from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel quant_config = TransformersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) text_encoder_2_4bit = T5EncoderModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2", quantization_config=quant_config, torch_dtype=torch.float16, ) quant_config = DiffusersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, )
43_8_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/diffusers/en/quantization/bitsandbytes/#nested-quantization
.md
quant_config = DiffusersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) transformer_4bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="transformer", quantization_config=quant_config, torch_dtype=torch.float16, ) ```
43_8_2
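The nested-quantization example above stops at the two quantized components. As a hedged follow-up, here is a minimal sketch of how they could be plugged into a `FluxPipeline` for inference; the CPU-offload call and output filename are assumptions, not part of the original section.

```py
import torch
from diffusers import FluxPipeline

# Reuse text_encoder_2_4bit and transformer_4bit from the snippet above;
# the remaining pipeline components are loaded in fp16.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    text_encoder_2=text_encoder_2_4bit,
    transformer=transformer_4bit,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # assumed here to keep peak GPU memory low

image = pipe("A cat holding a sign that says hello world").images[0]
image.save("nested_quant_flux.png")
```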
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/diffusers/en/quantization/bitsandbytes/#dequantizing-bitsandbytes-models
.md
Once quantized, you can dequantize a model to its original precision, but this might result in a small loss of quality. Make sure you have enough GPU RAM to fit the dequantized model. ```python import torch from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel
43_9_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/diffusers/en/quantization/bitsandbytes/#dequantizing-bitsandbytes-models
.md
from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel quant_config = TransformersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) text_encoder_2_4bit = T5EncoderModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2", quantization_config=quant_config, torch_dtype=torch.float16, ) quant_config = DiffusersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, )
43_9_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/diffusers/en/quantization/bitsandbytes/#dequantizing-bitsandbytes-models
.md
quant_config = DiffusersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) transformer_4bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="transformer", quantization_config=quant_config, torch_dtype=torch.float16, ) text_encoder_2_4bit.dequantize() transformer_4bit.dequantize() ```
43_9_2
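As an optional sanity check (an assumption, not part of the original doc), you can confirm that the dequantized modules are back in a higher-precision dtype:

```py
# After .dequantize(), the weights should be in the originally requested dtype.
print(next(transformer_4bit.parameters()).dtype)     # expected: torch.float16
print(next(text_encoder_2_4bit.parameters()).dtype)  # expected: torch.float16
```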
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/diffusers/en/quantization/bitsandbytes/#resources
.md
* [End-to-end notebook showing Flux.1 Dev inference in a free-tier Colab](https://gist.github.com/sayakpaul/c76bd845b48759e11687ac550b99d8b4) * [Training](https://gist.github.com/sayakpaul/05afd428bc089b47af7c016e42004527)
43_10_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/
.md
<!-- Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
44_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
44_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
[TorchAO](https://github.com/pytorch/ao) is an architecture optimization library for PyTorch. It provides high-performance dtypes, optimization techniques, and kernels for inference and training, featuring composability with native PyTorch features like [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html), FullyShardedDataParallel (FSDP), and more. Before you begin, make sure you have PyTorch 2.5+ and TorchAO installed. ```bash pip install -U torch torchao ```
44_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
Before you begin, make sure you have PyTorch 2.5+ and TorchAO installed. ```bash pip install -U torch torchao ``` Quantize a model by passing [`TorchAoConfig`] to [`~ModelMixin.from_pretrained`] (you can also load pre-quantized models). This works for any model in any modality, as long as it supports loading with [Accelerate](https://hf.co/docs/accelerate/index) and contains `torch.nn.Linear` layers. The example below only quantizes the weights to int8. ```python import torch
44_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
The example below only quantizes the weights to int8. ```python import torch from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig
44_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
model_id = "black-forest-labs/FLUX.1-dev" dtype = torch.bfloat16 quantization_config = TorchAoConfig("int8wo") transformer = FluxTransformer2DModel.from_pretrained( model_id, subfolder="transformer", quantization_config=quantization_config, torch_dtype=dtype, ) pipe = FluxPipeline.from_pretrained( model_id, transformer=transformer, torch_dtype=dtype, ) pipe.to("cuda")
44_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
# Without quantization: ~31.447 GB # With quantization: ~20.40 GB print(f"Pipeline memory usage: {torch.cuda.max_memory_reserved() / 1024**3:.3f} GB")
44_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
prompt = "A cat holding a sign that says hello world" image = pipe( prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512 ).images[0] image.save("output.png") ``` TorchAO is fully compatible with [torch.compile](./optimization/torch2.0#torchcompile), setting it apart from other quantization methods. This makes it easy to speed up inference with just one line of code. ```python # In the above code, add the following after initializing the transformer
44_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
```python # In the above code, add the following after initializing the transformer transformer = torch.compile(transformer, mode="max-autotune", fullgraph=True) ``` For speed and memory benchmarks on Flux and CogVideoX, please refer to the table [here](https://github.com/huggingface/diffusers/pull/10009#issue-2688781450). You can also find some torchao [benchmarks](https://github.com/pytorch/ao/tree/main/torchao/quantization#benchmarks) numbers for various hardware.
44_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
torchao also supports an automatic quantization API through [autoquant](https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md#autoquantization). Autoquantization determines the best quantization strategy applicable to a model by comparing the performance of each technique on chosen input types and shapes. Currently, this can be used directly on the underlying modeling components. Diffusers will also expose an autoquant configuration option in the future.
44_1_7
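Diffusers does not expose an autoquant option yet, so the sketch below only illustrates how `torchao.autoquant` might be applied directly to a modeling component, following the pattern from the torchao README linked above; treat the exact call signature as an assumption to verify.

```py
import torch
import torchao
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")

# autoquant benchmarks candidate quantization strategies on the shapes it
# observes during the first forward passes and picks the fastest per layer.
transformer = torchao.autoquant(torch.compile(transformer, mode="max-autotune"))
```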
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
The `TorchAoConfig` class accepts three parameters: - `quant_type`: A string naming one of the quantization types below. - `modules_to_not_convert`: A list of full or partial module names for which quantization should not be performed. For example, to skip quantizing the [`FluxTransformer2DModel`]'s first block, one would specify: `modules_to_not_convert=["single_transformer_blocks.0"]`.
44_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#torchao
.md
- `kwargs`: A dict of keyword arguments to pass to the underlying quantization method which will be invoked based on `quant_type`.
44_1_9
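To make the three parameters concrete, here is a small hedged example; the `group_size` keyword is an assumption about what the underlying `int4_weight_only` method accepts and should be checked against the torchao API.

```py
from diffusers import TorchAoConfig

# quant_type picks the method, modules_to_not_convert skips layers, and any
# extra keyword arguments are forwarded to the underlying torchao function.
quantization_config = TorchAoConfig(
    "int4wo",
    modules_to_not_convert=["single_transformer_blocks.0"],
    group_size=64,  # assumed kwarg of int4_weight_only
)
```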
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#supported-quantization-types
.md
torchao supports weight-only quantization as well as combined weight and dynamic-activation quantization for int8, float3-float8, and uint1-uint7. Weight-only quantization stores the model weights in a specific low-bit data type but performs computation with a higher-precision data type, like `bfloat16`. This lowers the memory requirements from model weights but retains the memory peaks for activation computation.
44_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#supported-quantization-types
.md
Dynamic activation quantization stores the model weights in a low-bit dtype, while also quantizing the activations on-the-fly to save additional memory. This lowers the memory requirements from model weights, while also lowering the memory overhead from activation computations. However, this may sometimes come with a quality tradeoff, so it is recommended to test different models thoroughly. The quantization methods supported are as follows: | **Category** | **Full Function Names** | **Shorthands** |
44_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#supported-quantization-types
.md
The quantization methods supported are as follows: | **Category** | **Full Function Names** | **Shorthands** | |--------------|-------------------------|----------------| | **Integer quantization** | `int4_weight_only`, `int8_dynamic_activation_int4_weight`, `int8_weight_only`, `int8_dynamic_activation_int8_weight` | `int4wo`, `int4dq`, `int8wo`, `int8dq` |
44_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#supported-quantization-types
.md
| **Floating point 8-bit quantization** | `float8_weight_only`, `float8_dynamic_activation_float8_weight`, `float8_static_activation_float8_weight` | `float8wo`, `float8wo_e5m2`, `float8wo_e4m3`, `float8dq`, `float8dq_e4m3`, `float8_e4m3_tensor`, `float8_e4m3_row` | | **Floating point X-bit quantization** | `fpx_weight_only` | `fpX_eAwB` where `X` is the number of bits (1-7), `A` is exponent bits, and `B` is mantissa bits. Constraint: `X == A + B + 1` |
44_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#supported-quantization-types
.md
| **Unsigned Integer quantization** | `uintx_weight_only` | `uint1wo`, `uint2wo`, `uint3wo`, `uint4wo`, `uint5wo`, `uint6wo`, `uint7wo` | Some quantization methods are aliases (for example, `int8wo` is the commonly used shorthand for `int8_weight_only`). This allows using the quantization methods described in the torchao docs as-is, while also making it convenient to remember their shorthand notations.
44_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#supported-quantization-types
.md
Refer to the official torchao documentation for a better understanding of the available quantization methods and an exhaustive list of the available configuration options.
44_2_5
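As a quick illustration of the alias behavior described above, a full function name and its shorthand should be interchangeable when building the config (a small sketch, assuming `int8wo`/`int8_weight_only` as listed in the table):

```py
from diffusers import TorchAoConfig

# Both spellings select 8-bit weight-only quantization.
config_full = TorchAoConfig("int8_weight_only")
config_short = TorchAoConfig("int8wo")
```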
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
To serialize a quantized model in a given dtype, first load the model with the desired quantization dtype and then save it using the [`~ModelMixin.save_pretrained`] method. ```python import torch from diffusers import FluxTransformer2DModel, TorchAoConfig
44_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
quantization_config = TorchAoConfig("int8wo") transformer = FluxTransformer2DModel.from_pretrained( "black-forest-labs/Flux.1-Dev", subfolder="transformer", quantization_config=quantization_config, torch_dtype=torch.bfloat16, ) transformer.save_pretrained("/path/to/flux_int8wo", safe_serialization=False) ``` To load a serialized quantized model, use the [`~ModelMixin.from_pretrained`] method. ```python import torch from diffusers import FluxPipeline, FluxTransformer2DModel
44_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
transformer = FluxTransformer2DModel.from_pretrained("/path/to/flux_int8wo", torch_dtype=torch.bfloat16, use_safetensors=False) pipe = FluxPipeline.from_pretrained("black-forest-labs/Flux.1-Dev", transformer=transformer, torch_dtype=torch.bfloat16) pipe.to("cuda")
44_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
prompt = "A cat holding a sign that says hello world" image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0] image.save("output.png") ```
44_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
image.save("output.png") ``` Some quantization methods, such as `uint4wo`, cannot be loaded directly and may result in an `UnpicklingError` when trying to load the models, but work as expected when saving them. In order to work around this, one can load the state dict manually into the model. Note, however, that this requires using `weights_only=False` in `torch.load`, so it should be run only if the weights were obtained from a trustable source. ```python import torch
44_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
```python import torch from accelerate import init_empty_weights from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig
44_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
# Serialize the model transformer = FluxTransformer2DModel.from_pretrained( "black-forest-labs/Flux.1-Dev", subfolder="transformer", quantization_config=TorchAoConfig("uint4wo"), torch_dtype=torch.bfloat16, ) transformer.save_pretrained("/path/to/flux_uint4wo", safe_serialization=False, max_shard_size="50GB") # ...
44_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#serializing-and-deserializing-quantized-models
.md
# Load the model state_dict = torch.load("/path/to/flux_uint4wo/diffusion_pytorch_model.bin", weights_only=False, map_location="cpu") with init_empty_weights(): transformer = FluxTransformer2DModel.from_config("/path/to/flux_uint4wo/config.json") transformer.load_state_dict(state_dict, strict=True, assign=True) ```
44_3_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/diffusers/en/quantization/torchao/#resources
.md
- [TorchAO Quantization API](https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md) - [Diffusers-TorchAO examples](https://github.com/sayakpaul/diffusers-torchao)
44_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/diffusers/en/quantization/overview/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
45_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/diffusers/en/quantization/overview/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
45_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/diffusers/en/quantization/overview/#quantization
.md
Quantization techniques focus on representing data with less information while also trying not to lose too much accuracy. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size, which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations
45_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/diffusers/en/quantization/overview/#quantization
.md
store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.
45_1_1
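To make the size claim concrete, here is a small back-of-the-envelope sketch; the 12B parameter count is only an illustrative assumption, roughly the scale of the Flux transformer:

```py
# Approximate model-weight memory for a hypothetical 12B-parameter model.
num_params = 12e9
for name, bits in [("float32", 32), ("float16", 16), ("int8", 8), ("int4", 4)]:
    gb = num_params * bits / 8 / 1024**3
    print(f"{name:>8}: ~{gb:.1f} GB")
```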
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/diffusers/en/quantization/overview/#quantization
.md
<Tip> Interested in adding a new quantization method to Diffusers? Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) to learn more about adding a new quantization method. </Tip> <Tip> If you are new to the quantization field, we recommend checking out these beginner-friendly quantization courses, created in collaboration with DeepLearning.AI:
45_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/diffusers/en/quantization/overview/#quantization
.md
* [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/) * [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/) </Tip>
45_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/diffusers/en/quantization/overview/#when-to-use-what
.md
Diffusers currently supports the following quantization methods. - [BitsandBytes](./bitsandbytes) - [TorchAO](./torchao) - [GGUF](./gguf) [This resource](https://huggingface.co/docs/transformers/main/en/quantization/overview#when-to-use-what) provides a good overview of the pros and cons of different quantization techniques.
45_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
46_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
46_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#gguf
.md
The GGUF file format is typically used to store models for inference with [GGML](https://github.com/ggerganov/ggml) and supports a variety of block-wise quantization options. Diffusers supports loading checkpoints that were prequantized and saved in the GGUF format via `from_single_file` loading with model classes. Loading GGUF checkpoints via Pipelines is currently not supported.
46_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#gguf
.md
The following example will load the [FLUX.1 DEV](https://huggingface.co/black-forest-labs/FLUX.1-dev) transformer model using the GGUF Q2_K quantization variant. Before starting, please install gguf in your environment: ```shell pip install -U gguf ``` Since GGUF is a single-file format, use [`~FromSingleFileMixin.from_single_file`] to load the model and pass in the [`GGUFQuantizationConfig`].
46_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#gguf
.md
When using GGUF checkpoints, the quantized weights remain in a low-memory `dtype` (typically `torch.uint8`) and are dynamically dequantized and cast to the configured `compute_dtype` during each module's forward pass through the model. The `GGUFQuantizationConfig` allows you to set the `compute_dtype`.
46_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#gguf
.md
The functions used for dynamic dequantization are based on the great work done by [city96](https://github.com/city96/ComfyUI-GGUF), who created the PyTorch ports of the original [`numpy`](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/quants.py) implementation by [compilade](https://github.com/compilade). ```python import torch
46_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#gguf
.md
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig
46_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#gguf
.md
ckpt_path = ( "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf" ) transformer = FluxTransformer2DModel.from_single_file( ckpt_path, quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16), torch_dtype=torch.bfloat16, ) pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16, ) pipe.enable_model_cpu_offload() prompt = "A cat holding a sign that says hello world"
46_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#gguf
.md
torch_dtype=torch.bfloat16, ) pipe.enable_model_cpu_offload() prompt = "A cat holding a sign that says hello world" image = pipe(prompt, generator=torch.manual_seed(0)).images[0] image.save("flux-gguf.png") ```
46_1_6
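If you want a rough check of the memory savings, the same pattern used in the torchao example earlier can be reused after running the GGUF pipeline; the exact number depends on your GPU and is not from the original doc.

```py
import torch

# Peak reserved GPU memory observed while running the GGUF-quantized pipeline.
print(f"Max memory reserved: {torch.cuda.max_memory_reserved() / 1024**3:.2f} GB")
```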
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quantization/gguf.md
https://huggingface.co/docs/diffusers/en/quantization/gguf/#supported-quantization-types
.md
- BF16 - Q4_0 - Q4_1 - Q5_0 - Q5_1 - Q8_0 - Q2_K - Q3_K - Q4_K - Q5_K - Q6_K
46_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
47_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
47_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#shap-e
.md
[[open-in-colab]] Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: 1. an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset
47_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#shap-e
.md
2. a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: ```py # uncomment to install the necessary libraries in Colab #!pip install -q diffusers transformers accelerate trimesh ```
47_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#text-to-3d
.md
To generate a gif of a 3D object, pass a text prompt to the [`ShapEPipeline`]. The pipeline generates a list of image frames which are used to create the 3D object. ```py import torch from diffusers import ShapEPipeline device = torch.device("cuda" if torch.cuda.is_available() else "cpu") pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") pipe = pipe.to(device) guidance_scale = 15.0 prompt = ["A firecracker", "A birthday cupcake"]
47_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#text-to-3d
.md
guidance_scale = 15.0 prompt = ["A firecracker", "A birthday cupcake"] images = pipe( prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, ).images ``` Now use the [`~utils.export_to_gif`] function to turn the list of image frames into a gif of the 3D object. ```py from diffusers.utils import export_to_gif
47_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#text-to-3d
.md
export_to_gif(images[0], "firecracker_3d.gif") export_to_gif(images[1], "cake_3d.gif") ``` <div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/firecracker_out.gif"/> <figcaption class="mt-2 text-center text-sm text-gray-500">prompt = "A firecracker"</figcaption> </div> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/cake_out.gif"/>
47_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#text-to-3d
.md
<figcaption class="mt-2 text-center text-sm text-gray-500">prompt = "A birthday cupcake"</figcaption> </div> </div>
47_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#image-to-3d
.md
To generate a 3D object from another image, use the [`ShapEImg2ImgPipeline`]. You can use an existing image or generate an entirely new one. Let's use the [Kandinsky 2.1](../api/pipelines/kandinsky) model to generate a new image. ```py from diffusers import DiffusionPipeline import torch
47_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#image-to-3d
.md
prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") prompt = "A cheeseburger, white background"
47_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#image-to-3d
.md
prompt = "A cheeseburger, white background" image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() image = pipeline( prompt, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, ).images[0] image.save("burger.png") ``` Pass the cheeseburger to the [`ShapEImg2ImgPipeline`] to generate a 3D representation of it. ```py from PIL import Image from diffusers import ShapEImg2ImgPipeline from diffusers.utils import export_to_gif
47_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#image-to-3d
.md
pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") guidance_scale = 3.0 image = Image.open("burger.png").resize((256, 256)) images = pipe( image, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, ).images
47_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#image-to-3d
.md
gif_path = export_to_gif(images[0], "burger_3d.gif") ``` <div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/burger_in.png"/> <figcaption class="mt-2 text-center text-sm text-gray-500">cheeseburger</figcaption> </div> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/burger_out.gif"/>
47_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#image-to-3d
.md
<figcaption class="mt-2 text-center text-sm text-gray-500">3D cheeseburger</figcaption> </div> </div>
47_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#generate-mesh
.md
Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you'll convert the output into a `glb` file because the 🤗 Datasets library supports mesh visualization of `glb` files which can be rendered by the [Dataset viewer](https://huggingface.co/docs/hub/datasets-viewer#dataset-preview). You can generate mesh outputs for both the [`ShapEPipeline`] and [`ShapEImg2ImgPipeline`] by specifying the `output_type` parameter as `"mesh"`:
47_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#generate-mesh
.md
```py import torch from diffusers import ShapEPipeline
47_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#generate-mesh
.md
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") pipe = pipe.to(device) guidance_scale = 15.0 prompt = "A birthday cupcake"
47_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#generate-mesh
.md
images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images ``` Use the [`~utils.export_to_ply`] function to save the mesh output as a `ply` file: <Tip> You can optionally save the mesh output as an `obj` file with the [`~utils.export_to_obj`] function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! </Tip> ```py from diffusers.utils import export_to_ply
47_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#generate-mesh
.md
ply_path = export_to_ply(images[0], "3d_cake.ply") print(f"Saved to folder: {ply_path}") ``` Then you can convert the `ply` file to a `glb` file with the trimesh library: ```py import trimesh mesh = trimesh.load("3d_cake.ply") mesh_export = mesh.export("3d_cake.glb", file_type="glb") ``` By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: ```py import trimesh import numpy as np
47_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/shap-e.md
https://huggingface.co/docs/diffusers/en/using-diffusers/shap-e/#generate-mesh
.md
mesh = trimesh.load("3d_cake.ply") rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) mesh = mesh.apply_transform(rot) mesh_export = mesh.export("3d_cake.glb", file_type="glb") ``` Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/3D-cake.gif"/> </div>
47_4_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/
.md
<!--Copyright 2024 Marigold authors and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
48_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
48_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
[Marigold](../api/pipelines/marigold) is a novel diffusion-based dense prediction approach, and a set of pipelines for various computer vision tasks, such as monocular depth estimation. This guide will show you how to use Marigold to obtain fast and high-quality predictions for images and videos. Each pipeline supports one computer vision task, which takes an RGB image as input and produces a *prediction* of the modality of interest, such as a depth map of the input image.
48_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
Currently, the following tasks are implemented: | Pipeline | Predicted Modalities | Demos |
48_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
|---------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------:|
48_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
| [MarigoldDepthPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_depth.py) | [Depth](https://en.wikipedia.org/wiki/Depth_map), [Disparity](https://en.wikipedia.org/wiki/Binocular_disparity) | [Fast Demo (LCM)](https://huggingface.co/spaces/prs-eth/marigold-lcm), [Slow Original Demo (DDIM)](https://huggingface.co/spaces/prs-eth/marigold) |
48_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
| [MarigoldNormalsPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_normals.py) | [Surface normals](https://en.wikipedia.org/wiki/Normal_mapping) | [Fast Demo (LCM)](https://huggingface.co/spaces/prs-eth/marigold-normals-lcm) |
48_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
The original checkpoints can be found under the [PRS-ETH](https://huggingface.co/prs-eth/) Hugging Face organization. These checkpoints are meant to work with diffusers pipelines and the [original codebase](https://github.com/prs-eth/marigold). The original code can also be used to train new checkpoints.
48_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
| Checkpoint | Modality | Comment
48_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
|
48_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
|-----------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
48_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
48_1_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
| [prs-eth/marigold-v1-0](https://huggingface.co/prs-eth/marigold-v1-0) | Depth | The first Marigold Depth checkpoint, which predicts *affine-invariant depth* maps. The performance of this checkpoint in benchmarks was studied in the original [paper](https://huggingface.co/papers/2312.02145). Designed to be used with the `DDIMScheduler` at inference, it requires at least 10 steps to get reliable predictions. Affine-invariant depth prediction has a range of values in each pixel
48_1_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
requires at least 10 steps to get reliable predictions. Affine-invariant depth prediction has a range of values in each pixel between 0 (near plane) and 1 (far plane); both planes are chosen by the model as part of the inference process. See the `MarigoldImageProcessor` reference for visualization utilities. |
48_1_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
| [prs-eth/marigold-depth-lcm-v1-0](https://huggingface.co/prs-eth/marigold-depth-lcm-v1-0) | Depth | The fast Marigold Depth checkpoint, fine-tuned from `prs-eth/marigold-v1-0`. Designed to be used with the `LCMScheduler` at inference, it requires as little as 1 step to get reliable predictions. The prediction reliability saturates at 4 steps and declines after that.
48_1_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
|
48_1_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
| [prs-eth/marigold-normals-v0-1](https://huggingface.co/prs-eth/marigold-normals-v0-1) | Normals | A preview checkpoint for the Marigold Normals pipeline. Designed to be used with the `DDIMScheduler` at inference, it requires at least 10 steps to get reliable predictions. The surface normals predictions are unit-length 3D vectors with values in the range from -1 to 1. *This checkpoint will be phased out after the release of `v1-0` version.*
48_1_14
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
checkpoint will be phased out after the release of `v1-0` version.* |
48_1_15
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
| [prs-eth/marigold-normals-lcm-v0-1](https://huggingface.co/prs-eth/marigold-normals-lcm-v0-1) | Normals | The fast Marigold Normals checkpoint, fine-tuned from `prs-eth/marigold-normals-v0-1`. Designed to be used with the `LCMScheduler` at inference, it requires as little as 1 step to get reliable predictions. The prediction reliability saturates at 4 steps and declines after that. *This checkpoint will be phased out after the release of `v1-0` version.*
48_1_16
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
*This checkpoint will be phased out after the release of `v1-0` version.* |
48_1_17
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
The examples below are mostly given for depth prediction, but they apply equally to the other supported modalities. We showcase the predictions using the same input image of Albert Einstein generated by Midjourney. This makes it easier to compare visualizations of the predictions across various modalities and checkpoints. <div class="flex gap-4" style="justify-content: center; width: 100%;"> <div style="flex: 1 1 50%; max-width: 50%;">
48_1_18
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#marigold-pipelines-for-computer-vision-tasks
.md
<div class="flex gap-4" style="justify-content: center; width: 100%;"> <div style="flex: 1 1 50%; max-width: 50%;"> <img class="rounded-xl" src="https://marigoldmonodepth.github.io/images/einstein.jpg"/> <figcaption class="mt-1 text-center text-sm text-gray-500"> Example input image for all Marigold pipelines </figcaption> </div> </div>
48_1_19
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#depth-prediction-quick-start
.md
To get the first depth prediction, load the `prs-eth/marigold-depth-lcm-v1-0` checkpoint into the `MarigoldDepthPipeline`, put the image through the pipeline, and save the predictions: ```python import diffusers import torch pipe = diffusers.MarigoldDepthPipeline.from_pretrained( "prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16 ).to("cuda") image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg") depth = pipe(image)
48_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#depth-prediction-quick-start
.md
image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg") depth = pipe(image) vis = pipe.image_processor.visualize_depth(depth.prediction) vis[0].save("einstein_depth.png")
48_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#depth-prediction-quick-start
.md
depth_16bit = pipe.image_processor.export_depth_to_16bit_png(depth.prediction) depth_16bit[0].save("einstein_depth_16bit.png") ``` The visualization function for depth [`~pipelines.marigold.marigold_image_processing.MarigoldImageProcessor.visualize_depth`] applies one of [matplotlib's colormaps](https://matplotlib.org/stable/users/explain/colors/colormaps.html) (`Spectral` by default) to map the predicted pixel values from a single-channel `[0, 1]` depth range into an RGB image.
48_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#depth-prediction-quick-start
.md
With the `Spectral` colormap, near pixels are painted red and far pixels are colored blue. The 16-bit PNG file stores the single-channel values mapped linearly from the `[0, 1]` range into `[0, 65535]`. Below are the raw and the visualized predictions; as can be seen, dark areas (mustache) are easier to distinguish in the visualization: <div class="flex gap-4"> <div style="flex: 1 1 50%; max-width: 50%;">
48_2_3
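As a companion to the 16-bit PNG description above, this sketch shows how the saved file can be read back into the `[0, 1]` range by reversing the linear mapping (the filename is reused from the earlier snippet):

```py
import numpy as np
from PIL import Image

# The 16-bit PNG stores depth scaled linearly from [0, 1] to [0, 65535],
# so dividing by 65535 recovers the normalized depth values.
depth_png = np.asarray(Image.open("einstein_depth_16bit.png"), dtype=np.uint16)
depth = depth_png.astype(np.float32) / 65535.0
print(depth.shape, depth.min(), depth.max())
```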
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#depth-prediction-quick-start
.md
<div class="flex gap-4"> <div style="flex: 1 1 50%; max-width: 50%;"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_depth_16bit.png"/> <figcaption class="mt-1 text-center text-sm text-gray-500"> Predicted depth (16-bit PNG) </figcaption> </div> <div style="flex: 1 1 50%; max-width: 50%;">
48_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#depth-prediction-quick-start
.md
Predicted depth (16-bit PNG) </figcaption> </div> <div style="flex: 1 1 50%; max-width: 50%;"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_depth.png"/> <figcaption class="mt-1 text-center text-sm text-gray-500"> Predicted depth visualization (Spectral) </figcaption> </div> </div>
48_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/using-diffusers/marigold_usage.md
https://huggingface.co/docs/diffusers/en/using-diffusers/marigold_usage/#surface-normals-prediction-quick-start
.md
Load the `prs-eth/marigold-normals-lcm-v0-1` checkpoint into the `MarigoldNormalsPipeline`, put the image through the pipeline, and save the predictions: ```python import diffusers import torch pipe = diffusers.MarigoldNormalsPipeline.from_pretrained( "prs-eth/marigold-normals-lcm-v0-1", variant="fp16", torch_dtype=torch.float16 ).to("cuda") image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg") normals = pipe(image)
48_3_0