source: stringclasses (470 values)
url: stringlengths (49 to 167)
file_type: stringclasses (1 value)
chunk: stringlengths (1 to 512)
chunk_id: stringlengths (5 to 9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#exllama
.md
speed even further, use the [ExLlamaV2](https://github.com/turboderp/exllamav2) kernels by configuring the `exllama_config` parameter:
437_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#exllama
.md
```py import torch from transformers import AutoModelForCausalLM, GPTQConfig
437_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#exllama
.md
gptq_config = GPTQConfig(bits=4, exllama_config={"version":2}) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config) ``` <Tip warning={true}> Only 4-bit models are supported, and we recommend deactivating the ExLlama kernels if you're finetuning a quantized model with PEFT. </Tip>
437_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#exllama
.md
</Tip> The ExLlama kernels are only supported when the entire model is on the GPU. If you're doing inference on a CPU with AutoGPTQ or GPTQModel, then you'll need to disable the ExLlama kernel. This overwrites the attributes related to the ExLlama kernels in the quantization config of the config.json file. ```py import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits=4, use_exllama=False)
437_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#exllama
.md
import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits=4, use_exllama=False) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="cpu", quantization_config=gptq_config) ```
437_3_5
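For readability, the flattened ExLlama snippets in the records above can be assembled into a single sketch (the repo id `{your_username}/opt-125m-gptq` is the placeholder used in the original docs):

```python
from transformers import AutoModelForCausalLM, GPTQConfig

# GPU inference: enable the ExLlamaV2 kernels (4-bit models only).
gptq_config = GPTQConfig(bits=4, exllama_config={"version": 2})
model = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq",
    device_map="auto",
    quantization_config=gptq_config,
)

# CPU inference with AutoGPTQ or GPTQModel: disable the ExLlama kernels.
gptq_config_cpu = GPTQConfig(bits=4, use_exllama=False)
model_cpu = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq",
    device_map="cpu",
    quantization_config=gptq_config_cpu,
)
```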
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
438_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
438_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#quantization
.md
Quantization techniques focus on representing data with less information while also trying not to lose too much accuracy. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations
438_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#quantization
.md
store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.
438_1_1
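As a quick back-of-the-envelope check of the size reduction described above, the following sketch computes the weight memory of a hypothetical 7B-parameter model at a few precisions (the parameter count is an assumption for illustration):

```python
# Weight memory is roughly: number of parameters x bytes per parameter.
num_params = 7_000_000_000  # hypothetical 7B-parameter model

for dtype, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = num_params * bytes_per_param / 2**30
    print(f"{dtype}: {gib:.1f} GiB")

# fp32: 26.1 GiB, fp16: 13.0 GiB, int8: 6.5 GiB, int4: 3.3 GiB
```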
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#quantization
.md
<Tip> Interested in adding a new quantization method to Transformers? Read the [HfQuantizer](./contribute) guide to learn how! </Tip> <Tip> If you are new to the quantization field, we recommend checking out these beginner-friendly courses about quantization, created in collaboration with DeepLearning.AI: * [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/)
438_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#quantization
.md
* [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/) </Tip>
438_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
The community has developed many quantization methods for various use cases. With Transformers, you can run any of these integrated methods depending on your use case, because each method has its own pros and cons. For example, some quantization methods require calibrating the model with a dataset for more accurate and "extreme" compression (up to 1-2 bits quantization), while other methods work out of the box with on-the-fly quantization.
438_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
Another parameter to consider is compatibility with your target device. Do you want to quantize on a CPU, GPU, or Apple silicon? In short, supporting a wide range of quantization methods allows you to pick the best quantization method for your specific use case. Use the table below to help you decide which quantization method to use.
438_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
Use the table below to help you decide which quantization method to use. | Quantization Method | On the fly quantization | CPU | CUDA GPU | ROCm GPU | Metal (Apple Silicon) | Intel GPU | Torch compile() | Bits | PEFT Fine Tuning | Serializable with 🤗Transformers | 🤗Transformers Support | Link to library |
438_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
|-----------------------------------------------|----------------------|-----------------|----------|-----------|------------------------------------|-----------------|-----------------|---------------|------------------|-----------------------------|-------------------------|---------------------------------------------|
438_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [AQLM](./aqlm.md) | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 1/2 | 🟢 | 🟢 | 🟢 | https://github.com/Vahe1994/AQLM |
438_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [AWQ](./awq.md) | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | ? | 4 | 🟢 | 🟢 | 🟢 | https://github.com/casper-hansen/AutoAWQ |
438_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [bitsandbytes](./bitsandbytes.md) | 🟢 | 🟡 <sub>1</sub> | 🟢 | 🟡 <sub>1</sub> | 🔴 <sub>2</sub> | 🟡 <sub>1</sub> | 🔴 <sub>1</sub> | 4/8 | 🟢 | 🟢 | 🟢 | https://github.com/bitsandbytes-foundation/bitsandbytes |
438_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [compressed-tensors](./compressed_tensors.md) | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 1/8 | 🟢 | 🟢 | 🟢 | https://github.com/neuralmagic/compressed-tensors |
438_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [EETQ](./eetq.md) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | ? | 8 | 🟢 | 🟢 | 🟢 | https://github.com/NetEase-FuXi/EETQ |
438_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [GGUF / GGML (llama.cpp)](../gguf.md) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 1/8 | 🔴 | [See Notes](../gguf.md) | [See Notes](../gguf.md) | https://github.com/ggerganov/llama.cpp |
438_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [GPTQModel](./gptq.md) | 🔴 | 🟢 <sub>3</sub> | 🟢 | 🟢 | 🟢 | 🟢 <sub>4</sub> | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/ModelCloud/GPTQModel |
438_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [AutoGPTQ](./gptq.md) | 🔴 | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/AutoGPTQ/AutoGPTQ |
438_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [HIGGS](./higgs.md) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 2/4 | 🔴 | 🟢 | 🟢 | https://github.com/HanGuo97/flute |
438_2_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [HQQ](./hqq.md) | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 1/8 | 🟢 | 🔴 | 🟢 | https://github.com/mobiusml/hqq/ |
438_2_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [optimum-quanto](./quanto.md) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 🟢 | 2/4/8 | 🔴 | 🔴 | 🟢 | https://github.com/huggingface/optimum-quanto |
438_2_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [FBGEMM_FP8](./fbgemm_fp8.md) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🔴 | 8 | 🔴 | 🟢 | 🟢 | https://github.com/pytorch/FBGEMM |
438_2_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [torchao](./torchao.md) | 🟢 | | 🟢 | 🔴 | 🟡 <sub>5</sub> | 🔴 | | 4/8 | | 🟢🔴 | 🟢 | https://github.com/pytorch/ao |
438_2_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
| [VPTQ](./vptq.md) | 🔴 | 🔴 | 🟢 | 🟡 | 🔴 | 🔴 | 🟢 | 1/8 | 🔴 | 🟢 | 🟢 | https://github.com/microsoft/VPTQ | <Tip>
438_2_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
**1:** bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend). Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.
438_2_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
</Tip> <Tip> **2:** bitsandbytes is seeking contributors to help develop and lead the Apple Silicon backend. Interested? Contact them directly via their repo. Stipends may be available through sponsorships. </Tip> <Tip> **3:** GPTQModel[CPU] supports 4-bit via IPEX on Intel/AMD and full bit range via Torch on Intel/AMD/Apple Silicon. </Tip> <Tip> **4:** GPTQModel[Intel GPU] via IPEX only supports 4-bit for Intel Datacenter Max/Arc GPUs. </Tip> <Tip>
438_2_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
.md
</Tip> <Tip> **4:** GPTQModel[Intel GPU] via IPEX only supports 4-bit for Intel Datacenter Max/Arc GPUs. </Tip> <Tip> **5:** torchao only supports int4 weight on Metal (Apple Silicon). </Tip>
438_2_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
439_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
439_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/#fbgemm-fp8
.md
With the FBGEMM FP8 quantization method, you can quantize your model in FP8 (W8A8): - the weights are quantized to 8-bit (FP8) per channel - the activations are quantized to 8-bit (FP8) per token It relies on the [FBGEMM](https://github.com/pytorch/FBGEMM) library, which provides efficient low-precision general matrix multiplication for small batch sizes and support for accuracy-loss-minimizing techniques such as row-wise quantization and outlier-aware quantization. > [!TIP]
439_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/#fbgemm-fp8
.md
> [!TIP] > You need a GPU with compute capability >= 9.0 (e.g., H100) Before you begin, make sure the following libraries are installed with their latest versions: ```bash pip install --upgrade accelerate fbgemm-gpu torch ```
439_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/#fbgemm-fp8
.md
```bash pip install --upgrade accelerate fbgemm-gpu torch ``` If you are having issues with the fbgemm-gpu and torch libraries, you might need to install the nightly release. You can follow the instructions [here](https://pytorch.org/FBGEMM/fbgemm_gpu-development/InstallationInstructions.html#fbgemm-gpu-install-libraries:~:text=found%20here.-,Install%20the%20FBGEMM_GPU%20Package,-Install%20through%20PyTorch)
439_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/#fbgemm-fp8
.md
By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file to automatically load the most memory-optimal data type. ```py from transformers import FbgemmFp8Config, AutoModelForCausalLM, AutoTokenizer
439_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/#fbgemm-fp8
.md
model_name = "meta-llama/Meta-Llama-3-8B" quantization_config = FbgemmFp8Config() quantized_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto", quantization_config=quantization_config) tokenizer = AutoTokenizer.from_pretrained(model_name) input_text = "What are we having for dinner?" input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
439_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/#fbgemm-fp8
.md
output = quantized_model.generate(**input_ids, max_new_tokens=10) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` A quantized model can be saved with `save_pretrained` and reloaded with `from_pretrained`. ```py quant_path = "/path/to/save/quantized/model" model.save_pretrained(quant_path) model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto") ```
439_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/optimum.md
https://huggingface.co/docs/transformers/en/quantization/optimum/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
440_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/optimum.md
https://huggingface.co/docs/transformers/en/quantization/optimum/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
440_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/optimum.md
https://huggingface.co/docs/transformers/en/quantization/optimum/#optimum
.md
The [Optimum](https://huggingface.co/docs/optimum/index) library supports quantization for Intel, Furiosa, ONNX Runtime, GPTQ, and lower-level PyTorch quantization functions. Consider using Optimum for quantization if you're using specific and optimized hardware like Intel CPUs, Furiosa NPUs or a model accelerator like ONNX Runtime.
440_1_0
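As a hedged illustration of the ONNX Runtime path mentioned above, dynamic INT8 quantization of an exported model might look like the sketch below; the model id and the `avx512_vnni` preset are example choices, so check the Optimum documentation for the exact API in your installed version:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export a Transformers model to ONNX, then quantize it dynamically to INT8 for CPU inference.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

quantizer = ORTQuantizer.from_pretrained(onnx_model)
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="distilbert_sst2_int8_onnx", quantization_config=dqconfig)
```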
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
441_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
441_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
<Tip> Try optimum-quanto + transformers with this [notebook](https://colab.research.google.com/drive/16CXfVmtdQvciSh9BopZUDYcmXCDpvgrT?usp=sharing)! </Tip> The [🤗 optimum-quanto](https://github.com/huggingface/optimum-quanto) library is a versatile PyTorch quantization toolkit. The quantization method used is linear quantization. Quanto provides several unique features such as: - weights quantization (`float8`, `int8`, `int4`, `int2`) - activation quantization (`float8`, `int8`)
441_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
- weights quantization (`float8`, `int8`, `int4`, `int2`) - activation quantization (`float8`, `int8`) - modality agnostic (e.g. CV, LLM) - device agnostic (e.g. CUDA, XPU, MPS, CPU) - compatibility with `torch.compile` - easy to add custom kernels for a specific device - supports quantization-aware training <!-- Add link to the blogpost --> Before you begin, make sure the following libraries are installed: ```bash pip install optimum-quanto accelerate transformers ```
441_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
```bash pip install optimum-quanto accelerate transformers ``` Now you can quantize a model by passing a [`QuantoConfig`] object to the [`~PreTrainedModel.from_pretrained`] method. This works for any model in any modality, as long as it contains `torch.nn.Linear` layers.
441_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
The integration with Transformers only supports weight quantization. For more complex use cases such as activation quantization, calibration, and quantization-aware training, you should use the [optimum-quanto](https://github.com/huggingface/optimum-quanto) library directly.
441_1_3
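For those activation-quantization and calibration use cases that require optimum-quanto directly, a minimal sketch could look like the following (the model id and the single calibration prompt are illustrative assumptions):

```python
from optimum.quanto import Calibration, freeze, qint8, quantize
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Quantize both weights and activations to int8, then calibrate the activation ranges
# by running a few representative samples through the model.
quantize(model, weights=qint8, activations=qint8)
with Calibration():
    inputs = tokenizer("Hello, my name is", return_tensors="pt")
    model(**inputs)

# Freeze to replace the float weights with their quantized counterparts.
freeze(model)
```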
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file to automatically load the most memory-optimal data type. ```py from transformers import AutoModelForCausalLM, AutoTokenizer, QuantoConfig
441_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
model_id = "facebook/opt-125m" tokenizer = AutoTokenizer.from_pretrained(model_id) quantization_config = QuantoConfig(weights="int8") quantized_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="cuda:0", quantization_config=quantization_config) ``` Note that serialization is not supported yet with transformers but it is coming soon! If you want to save the model, you can use quanto library instead.
441_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
The optimum-quanto library uses a linear quantization algorithm. Even though this is a basic quantization technique, it achieves very good results! Have a look at the following benchmark (llama-2-7b on the perplexity metric). You can find more benchmarks [here](https://github.com/huggingface/optimum-quanto/tree/main/bench/generation) <div class="flex gap-4"> <div>
441_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
.md
<div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/NousResearch-Llama-2-7b-hf_Perplexity.png" alt="llama-2-7b-quanto-perplexity" /> </div> </div> The library is versatile enough to be compatible with most PTQ optimization algorithms. The plan in the future is to integrate the most popular algorithms in the most seamless possible way (AWQ, Smoothquant).
441_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
442_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
442_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#contribute-new-quantization-method
.md
Transformers supports and integrates many quantization methods such as QLoRA, GPTQ, LLM.int8, and AWQ. However, there are other quantization approaches that are not yet integrated. To make adding and using these quantization methods with Transformers models easier, you should use the [`HfQuantizer`] class. The [`HfQuantizer`] is designed as an internal helper class for adding a quantization method instead of something you apply to every PyTorch module.
442_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#contribute-new-quantization-method
.md
This guide will show you how to integrate a new quantization method with the [`HfQuantizer`] class.
442_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#requirements
.md
Before integrating a new quantization method into Transformers, ensure the method you are trying to add meets the following prerequisites. Only quantization methods that can be run with PyTorch modules are currently supported. - The quantization method is available through a Python package that is pip-installable by anyone (it is also fine if you can only install the package from source). Ideally, pre-compiled kernels are included in the pip package.
442_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#requirements
.md
- The method can run on commonly-used hardware (CPU, GPU, ...). - The method is wrapped in a `nn.Module` (e.g., `Linear8bitLt`, `Linear4bit`), and the quantized linear layer should have the following definition: ```py class Linear4bit(nn.Module): def __init__(self, ...): ...
442_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#requirements
.md
def forward(self, x): return my_4bit_kernel(x, self.weight, self.bias) ``` This way, Transformers models can be easily quantized by replacing some instances of `nn.Linear` with a target class. - The quantization method should be serializable. You can save the quantized weights locally or push them to the Hub. - Make sure the package that contains the quantization kernels/primitive is stable (no frequent breaking changes).
442_2_2
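Fleshing out the skeleton above, a minimal sketch of such a wrapper might look like this; `my_4bit_kernel` is the hypothetical kernel named in the requirement and simply falls back to a float matmul here:

```python
import torch
import torch.nn as nn

def my_4bit_kernel(x, weight, bias):
    # Placeholder for a real packed 4-bit matmul kernel; falls back to a float linear here.
    return nn.functional.linear(x, weight, bias)

class Linear4bit(nn.Module):
    """Drop-in replacement for nn.Linear that holds (packed) quantized weights."""

    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        # A real implementation would store packed int4 data plus scales/zero-points.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        self.bias = nn.Parameter(torch.zeros(out_features), requires_grad=False) if bias else None

    def forward(self, x):
        return my_4bit_kernel(x, self.weight, self.bias)
```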
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#requirements
.md
- Make sure the package that contains the quantization kernels/primitive is stable (no frequent breaking changes). For some quantization methods, they may require "pre-quantizing" the models through data calibration (e.g., AWQ). In this case, we prefer to only support inference in Transformers and let the third-party library maintained by the ML community deal with the model quantization itself.
442_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
1. Create a new quantization config class inside [src/transformers/utils/quantization_config.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/utils/quantization_config.py) and make sure to expose the new quantization config inside Transformers main `init` by adding it to the [`_import_structure`](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/__init__.py#L1088) object of
442_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
object of [src/transformers/__init__.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/__init__.py).
442_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
2. Create a new file inside [src/transformers/quantizers/](https://github.com/huggingface/transformers/tree/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers) named `quantizer_your_method.py`, and make it inherit from [src/transformers/quantizers/base.py::HfQuantizer](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers/base.py#L28). Make sure to add the new quantizer and quantization config in the quantization auto-mapping
442_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
Make sure to add the new quantizer and quantization config in the quantization auto-mapping in [src/transformers/quantizers/auto.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers/auto.py).
442_3_3
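As a sketch of step 1 (all names are hypothetical placeholders; only `QuantizationConfigMixin` is an existing Transformers class), a new config class could look roughly like this:

```python
from transformers.utils.quantization_config import QuantizationConfigMixin

class MyMethodConfig(QuantizationConfigMixin):
    """Hypothetical config for a new quantization method."""

    def __init__(self, bits: int = 4, group_size: int = 64, **kwargs):
        self.quant_method = "my_method"  # placeholder identifier serialized into config.json
        self.bits = bits
        self.group_size = group_size
        self.post_init()

    def post_init(self):
        # Basic sanity checks, mirroring what existing configs do in their post_init.
        if self.bits not in (2, 4, 8):
            raise ValueError(f"Unsupported number of bits: {self.bits}")
```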
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
3. Define the following class attributes/property methods for your quantization method: * `requires_calibration`: Whether the quantization method requires a data calibration process. If set to `True`, you can only support inference with already-quantized weights, not quantization within Transformers.
442_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
* `required_packages`: A list of strings of the required packages to use the quantized weights. You might need to define some new utility methods such as `is_auto_awq_available` in [transformers/src/utils/import_utils.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/utils/import_utils.py).
442_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
* `requires_parameters_quantization`: Only required if your quantization method requires extra attention to the underlying `nn.Parameter` object. For example, bitsandbytes uses `Params4bit` and `Int8Param`, which require some extra attention when quantizing the model. Most recent quantization methods pack int2/int4 weights inside `torch.uint8` weights, so this flag should not really be required (it is set to `False` by default).
442_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
* `is_serializable`: A property method to determine whether the method is serializable or not. * `is_trainable`: A property method to determine whether you can fine-tune models on top of the quantization method (with or without PEFT approaches). 4. Write the `validate_environment` and `update_torch_dtype` methods. These methods are called before creating the quantized model to ensure users use the right configuration. You can have a look at how this is done on other quantizers.
442_3_7
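Putting steps 3 and 4 together, a hedged skeleton of the quantizer class might look like this (the class name, package name, and defaults are placeholders; check the hook signatures against `HfQuantizer` in your Transformers version):

```python
from transformers.quantizers.base import HfQuantizer
from transformers.utils import is_torch_available

class MyMethodHfQuantizer(HfQuantizer):
    """Hypothetical quantizer wiring up the attributes described above."""

    requires_calibration = False           # quantization can happen at load time
    required_packages = ["my_quant_lib"]   # placeholder pip package name
    requires_parameters_quantization = False

    def validate_environment(self, *args, **kwargs):
        if not is_torch_available():
            raise ImportError("PyTorch is required to use this quantization method.")

    def update_torch_dtype(self, torch_dtype):
        # Keep the user-requested dtype; some quantizers force e.g. torch.float16 here.
        return torch_dtype

    @property
    def is_serializable(self):
        return True

    @property
    def is_trainable(self):
        return False
```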
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
5. Write the `_process_model_before_weight_loading` method. In Transformers, the quantized models are initialized first on the `"meta"` device before loading the weights. This means the `_process_model_before_weight_loading` method takes care of manipulating the model skeleton to replace some modules (e.g., `nn.Linear`) with the target modules (quantization modules). You can define a module replacement logic or any other utility method by creating a new file in
442_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
modules (quantization modules). You can define a module replacement logic or any other utility method by creating a new file in [transformers/src/integrations/](https://github.com/huggingface/transformers/tree/abbffc4525566a48a9733639797c812301218b83/src/transformers/integrations) and exposing the relevant methods in that folder's `__init__.py` file. The best starting point would be to have a look at another quantization methods such as
442_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
in that folder's `__init__.py` file. The best starting point would be to have a look at another quantization methods such as [quantizer_awq.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers/quantizer_awq.py).
442_3_10
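A common shape for the module-replacement utility that `_process_model_before_weight_loading` relies on is sketched below (names are illustrative; real integrations also handle a `modules_to_not_convert` list coming from the quantization config):

```python
import torch.nn as nn

def replace_linear_with_target(model, target_cls, modules_to_not_convert=None):
    """Recursively swap nn.Linear layers for the quantized target class (sketch)."""
    modules_to_not_convert = modules_to_not_convert or []
    for name, module in model.named_children():
        if isinstance(module, nn.Linear) and name not in modules_to_not_convert:
            setattr(
                model,
                name,
                target_cls(module.in_features, module.out_features, bias=module.bias is not None),
            )
        else:
            # Recurse into child modules (e.g. attention blocks, MLPs).
            replace_linear_with_target(module, target_cls, modules_to_not_convert)
    return model
```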
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
6. Write the `_process_model_after_weight_loading` method. This method enables implementing additional features that require manipulating the model after loading the weights. 7. Document everything! Make sure your quantization method is documented by adding a new file under `docs/source/en/quantization` and adding a new row in the table in `docs/source/en/quantization/overview.md`.
442_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
.md
8. Add tests! You should add tests by first adding the package in our nightly Dockerfile inside `docker/transformers-quantization-latest-gpu` and then adding a new test file in `tests/quantization/xxx`. Feel free to check out how it is implemented for other quantization methods.
442_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
443_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
443_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/#hqq
.md
Half-Quadratic Quantization (HQQ) implements on-the-fly quantization via fast robust optimization. It doesn't require calibration data and can be used to quantize any model. Please refer to the <a href="https://github.com/mobiusml/hqq/">official package</a> for more details. For installation, we recommend you use the following approach to get the latest version and build its corresponding CUDA kernels: ``` pip install hqq ```
443_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/#hqq
.md
``` pip install hqq ``` To quantize a model, you need to create an [`HqqConfig`]. There are two ways of doing it: ``` Python from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig
443_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/#hqq
.md
# Method 1: all linear layers will use the same quantization config quant_config = HqqConfig(nbits=8, group_size=64) ``` ``` Python # Method 2: each linear layer with the same tag will use a dedicated quantization config q4_config = {'nbits':4, 'group_size':64} q3_config = {'nbits':3, 'group_size':32} quant_config = HqqConfig(dynamic_config={ 'self_attn.q_proj':q4_config, 'self_attn.k_proj':q4_config, 'self_attn.v_proj':q4_config, 'self_attn.o_proj':q4_config,
443_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/#hqq
.md
'mlp.gate_proj':q3_config, 'mlp.up_proj' :q3_config, 'mlp.down_proj':q3_config, }) ``` The second approach is especially interesting for quantizing Mixture-of-Experts (MoEs) because the experts are less affected by lower quantization settings. Then you simply quantize the model as follows: ``` Python model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="cuda", quantization_config=quant_config ) ```
443_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/#optimized-runtime
.md
HQQ supports various backends, including pure PyTorch and custom dequantization CUDA kernels. These backends are suitable for older GPUs and PEFT/QLoRA training. For faster inference, HQQ supports 4-bit fused kernels (TorchAO and Marlin), reaching up to 200 tokens/sec on a single 4090. For more details on how to use the backends, please refer to https://github.com/mobiusml/hqq/?tab=readme-ov-file#backend
443_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
444_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
444_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#compressed-tensors
.md
The [`compressed-tensors`](https://github.com/neuralmagic/compressed-tensors) library provides a versatile and efficient way to store and manage compressed model checkpoints. This library supports various quantization and sparsity schemes, making it a unified format for handling different model optimizations like GPTQ, AWQ, SmoothQuant, INT8, FP8, SparseGPT, and more. Some of the supported formats include: 1. `dense`
444_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#compressed-tensors
.md
Some of the supported formats include: 1. `dense` 2. `int-quantized` ([sample](https://huggingface.co/nm-testing/tinyllama-w8a8-compressed-hf-quantizer)): INT8 quantized models 3. `float-quantized` ([sample](https://huggingface.co/nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat)): FP8 quantized models; currently supports E4M3
444_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#compressed-tensors
.md
4. `pack-quantized` ([sample](https://huggingface.co/nm-testing/tinyllama-w4a16-compressed-hf-quantizer)): INT4 or INT8 weight-quantized models, packed into INT32. For INT4, the weights have an INT4 range but are stored as INT8 and then packed into INT32. Compressed models can be easily created using [llm-compressor](https://github.com/vllm-project/llm-compressor). Alternatively, models can be created independently and serialized with a compressed-tensors config.
444_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#compressed-tensors
.md
Alternatively, models can be created independently and serialized with a compressed-tensors config. To find existing models on the Hugging Face Model Hub, search for the [`compressed-tensors` tag](https://huggingface.co/models?other=compressed-tensors).
444_1_3
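To find such checkpoints programmatically rather than through the web UI, a small sketch with `huggingface_hub` filtering on the same tag:

```python
from huggingface_hub import HfApi

# List a few Hub models tagged with `compressed-tensors`.
api = HfApi()
for model in api.list_models(filter="compressed-tensors", limit=5):
    print(model.id)
```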
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#features
.md
- Weight and activation precisions: FP8, INT4, INT8 (for Q/DQ arbitrary precision is allowed for INT) - Quantization scales and zero-points strategies: [tensor, channel, group, block, token](https://github.com/neuralmagic/compressed-tensors/blob/83b2e7a969d70606421a76b9a3d112646077c8de/src/compressed_tensors/quantization/quant_args.py#L43-L52) - Dynamic per-token activation quantization (or any static strategy)
444_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#features
.md
- Dynamic per-token activation quantization (or any static strategy) - Sparsity in weights (unstructured or semi-structured like 2:4) can be composed with quantization for extreme compression - Supports quantization of arbitrary modules, not just Linear modules - Targeted support or ignoring of modules by name or class
444_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#installation
.md
It is recommended to install stable releases of compressed-tensors from [PyPI](https://pypi.org/project/compressed-tensors): ```bash pip install compressed-tensors ``` Developers who want to experiment with the latest features can also install the package from source: ```bash git clone https://github.com/neuralmagic/compressed-tensors cd compressed-tensors pip install -e . ```
444_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#quickstart-model-load
.md
Quantized models can be easily loaded for inference as shown below. Only models that have already been quantized can be loaded at the moment. To quantize a model into the compressed-tensors format see [llm-compressor](https://github.com/vllm-project/llm-compressor). ```python from transformers import AutoModelForCausalLM # Load the model in compressed-tensors format ct_model = AutoModelForCausalLM.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf")
444_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#quickstart-model-load
.md
# Measure memory usage mem_params = sum([param.nelement()*param.element_size() for param in ct_model.parameters()]) print(f"{mem_params/2**30:.4f} GB") # 8.4575 GB ``` We can see just above that the compressed-tensors FP8 checkpoint of Llama 3.1 8B can be loaded for inference using about half the memory of the unquantized reference checkpoint.
444_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
.md
```python from transformers import AutoModelForCausalLM, AutoTokenizer prompt = [ "Hello, my name is", "The capital of France is", "The future of AI is" ] model_name = "nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat" quantized_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name)
444_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
.md
inputs = tokenizer(prompt, return_tensors="pt") generated_ids = quantized_model.generate(**inputs, max_length=50, do_sample=False) outputs = tokenizer.batch_decode(generated_ids) print(outputs)
444_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
.md
"""
444_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
.md
['<|begin_of_text|>Hello, my name is [Name]. I am a [Your Profession/Student] and I am here to learn about the [Course/Program] at [University/Institution]. I am excited to be here and I am looking forward to', '<|begin_of_text|>The capital of France is Paris, which is located in the north-central part of the country. Paris is the most populous city in France and is known for its stunning architecture, art museums, fashion, and romantic atmosphere. The city is home to', "<|begin_of_text|>The future of AI
444_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
.md
stunning architecture, art museums, fashion, and romantic atmosphere. The city is home to', "<|begin_of_text|>The future of AI is here, and it's already changing the way we live and work. From virtual assistants to self-driving cars, AI is transforming industries and revolutionizing the way we interact with technology. But what does the future of AI hold"]
444_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
.md
"""
444_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
.md
``` The above shows a quick example of running generation with a `compressed-tensors` model. Currently, the model cannot be saved once it is loaded.
444_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
In this example we will examine how the compressed-tensors model nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf is defined through its configuration entry and see how this translates to the loaded model representation.
444_6_0
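One way to look at that configuration entry without loading the full weights is to read the checkpoint's config directly; the sketch below assumes the `quantization_config` key is present in the checkpoint's config.json, as it is for compressed-tensors models:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf")
# The quantization_config entry of config.json describes the compressed-tensors scheme.
print(config.quantization_config)
```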