Dataset columns: `source` (string, 470 distinct values) · `url` (string, 49–167 characters) · `file_type` (string, 1 distinct value) · `chunk` (string, 1–512 characters) · `chunk_id` (string, 5–9 characters)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
<figcaption class="text-center text-gray-500 text-lg">Fused module</figcaption>

| Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
|-----------:|---------------:|--------------:|-----------------:|----------------:|:--------------|
| 1 | 32 | 32 | 81.4899 | 80.2569 | 4.00 GB (5.05%) |
433_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
| 1 | 32 | 32 | 81.4899 | 80.2569 | 4.00 GB (5.05%) |
| 1 | 64 | 64 | 1756.1 | 106.26 | 4.00 GB (5.05%) |
| 1 | 128 | 128 | 2479.32 | 105.631 | 4.00 GB (5.06%) |
| 1 | 256 | 256 | 1813.6 | 85.7485 | 4.01 GB (5.06%) |
433_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
| 1 | 256 | 256 | 1813.6 | 85.7485 | 4.01 GB (5.06%) |
| 1 | 512 | 512 | 2848.9 | 97.701 | 4.11 GB (5.19%) |
| 1 | 1024 | 1024 | 3044.35 | 87.7323 | 4.41 GB (5.57%) |
| 1 | 2048 | 2048 | 2715.11 | 89.4709 | 5.57 GB (7.04%) |
433_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
| 1 | 2048 | 2048 | 2715.11 | 89.4709 | 5.57 GB (7.04%) |

The speed and throughput of fused and unfused modules were also tested with the [optimum-benchmark](https://github.com/huggingface/optimum-benchmark) library.

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_forward_memory_plot.png" alt="forward peak memory per batch size" />
433_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
    <figcaption class="mt-2 text-center text-sm text-gray-500">forward peak memory/batch size</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_generate_throughput_plot.png" alt="generate throughput per batch size" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">generate throughput/batch size</figcaption>
  </div>
</div>

</hfoption>
<hfoption id="unsupported architectures">
433_2_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
</div>
</div>

</hfoption>
<hfoption id="unsupported architectures">

For architectures that don't support fused modules yet, you need to create a custom fusing mapping with the `modules_to_fuse` parameter to define which modules should be fused. For example, to fuse the AWQ modules of the [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) model:

```python
import torch
from transformers import AwqConfig, AutoModelForCausalLM
433_2_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
model_id = "TheBloke/Yi-34B-AWQ" quantization_config = AwqConfig( bits=4, fuse_max_seq_len=512, modules_to_fuse={ "attention": ["q_proj", "k_proj", "v_proj", "o_proj"], "layernorm": ["ln1", "ln2", "norm"], "mlp": ["gate_proj", "up_proj", "down_proj"], "use_alibi": False, "num_attention_heads": 56, "num_key_value_heads": 8, "hidden_size": 7168 } )
433_2_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0)
```

The parameter `modules_to_fuse` should include:

- `"attention"`: The names of the attention layers to fuse in the following order: query, key, value, and output projection layer. If you don't want to fuse these layers, pass an empty list.
433_2_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
- `"layernorm"`: The names of all the LayerNorm layers you want to replace with a custom fused LayerNorm. If you don't want to fuse these layers, pass an empty list. - `"mlp"`: The names of the MLP layers you want to fuse into a single MLP layer in the order: (gate (dense, layer, post-attention) / up / down layers). - `"use_alibi"`: If your model uses ALiBi positional embedding. - `"num_attention_heads"`: The number of attention heads.
433_2_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
- `"use_alibi"`: If your model uses ALiBi positional embedding. - `"num_attention_heads"`: The number of attention heads. - `"num_key_value_heads"`: The number of key value heads that should be used to implement Grouped Query Attention (GQA). If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. - `"hidden_size"`: The dimension of the hidden representations. </hfoption>
433_2_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
- `"hidden_size"`: The dimension of the hidden representations. </hfoption> </hfoptions>
433_2_18
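Once the fused model is loaded with the custom mapping above, it behaves like any other Transformers model. The snippet below is a minimal usage sketch; the prompt and generation settings are illustrative rather than taken from the original docs.

```python
from transformers import AutoTokenizer

# Tokenizer for the same checkpoint used in the fused-module example above
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Yi-34B-AWQ")

inputs = tokenizer("Fused modules speed up inference because", return_tensors="pt").to(model.device)

# fuse_max_seq_len was set to 512 above, so keep prompt + generated tokens under that limit
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```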
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#exllama-v2-support
.md
Recent versions of `autoawq` support ExLlama-v2 kernels for faster prefill and decoding. To get started, first install the latest version of `autoawq` by running:

```bash
pip install git+https://github.com/casper-hansen/AutoAWQ.git
```

Get started by passing an `AwqConfig()` with `version="exllama"`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig

quantization_config = AwqConfig(version="exllama")
433_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#exllama-v2-support
.md
quantization_config = AwqConfig(version="exllama") model = AutoModelForCausalLM.from_pretrained( "TheBloke/Mistral-7B-Instruct-v0.1-AWQ", quantization_config=quantization_config, device_map="auto", ) input_ids = torch.randint(0, 100, (1, 128), dtype=torch.long, device="cuda") output = model(input_ids) print(output.logits)
433_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#exllama-v2-support
.md
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-AWQ") input_ids = tokenizer.encode("How to make a cake", return_tensors="pt").to(model.device) output = model.generate(input_ids, do_sample=True, max_length=50, pad_token_id=50256) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` <Tip warning={true}> Note this feature is supported on AMD GPUs. </Tip>
433_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#cpu-support
.md
Recent versions of `autoawq` support CPU inference with IPEX op optimizations. To get started, first install the latest version of `autoawq` by running:

```bash
pip install intel-extension-for-pytorch
pip install git+https://github.com/casper-hansen/AutoAWQ.git
```

Get started by passing an `AwqConfig()` with `version="ipex"`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig

quantization_config = AwqConfig(version="ipex")
433_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#cpu-support
.md
quantization_config = AwqConfig(version="ipex") model = AutoModelForCausalLM.from_pretrained( "TheBloke/TinyLlama-1.1B-Chat-v0.3-AWQ", quantization_config=quantization_config, device_map="cpu", ) input_ids = torch.randint(0, 100, (1, 128), dtype=torch.long, device="cpu") output = model(input_ids) print(output.logits)
433_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#cpu-support
.md
tokenizer = AutoTokenizer.from_pretrained("TheBloke/TinyLlama-1.1B-Chat-v0.3-AWQ") input_ids = tokenizer.encode("How to make a cake", return_tensors="pt") pad_token_id = tokenizer.eos_token_id output = model.generate(input_ids, do_sample=True, max_length=50, pad_token_id=pad_token_id) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` <Tip warning={true}> Note this feature is supported on Intel CPUs. </Tip>
433_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
434_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
434_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/#eetq
.md
The [EETQ](https://github.com/NetEase-FuXi/EETQ) library supports int8 per-channel weight-only quantization for NVIDIA GPUs. The high-performance GEMM and GEMV kernels are from FasterTransformer and TensorRT-LLM. It requires no calibration dataset, and the model does not need to be pre-quantized. Moreover, the accuracy degradation is negligible owing to the per-channel quantization.

Make sure you have eetq installed from the [release page](https://github.com/NetEase-FuXi/EETQ/releases)

```
434_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/#eetq
.md
Make sure you have eetq installed from the [release page](https://github.com/NetEase-FuXi/EETQ/releases)

```
pip install --no-cache-dir https://github.com/NetEase-FuXi/EETQ/releases/download/v1.0.0/EETQ-1.0.0+cu121+torch2.1.2-cp310-cp310-linux_x86_64.whl
```

or build it from source at https://github.com/NetEase-FuXi/EETQ. EETQ requires CUDA compute capability >= 7.0 and <= 8.9.

```
git clone https://github.com/NetEase-FuXi/EETQ.git
cd EETQ/
git submodule update --init --recursive
pip install .
```
434_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/#eetq
.md
```
git clone https://github.com/NetEase-FuXi/EETQ.git
cd EETQ/
git submodule update --init --recursive
pip install .
```

An unquantized model can be quantized via `from_pretrained`.

```py
from transformers import AutoModelForCausalLM, EetqConfig

path = "/path/to/model"
quantization_config = EetqConfig("int8")
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", quantization_config=quantization_config)
```
434_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/#eetq
.md
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", quantization_config=quantization_config)
```

A quantized model can be saved via `save_pretrained` and reused again via `from_pretrained`.

```py
quant_path = "/path/to/save/quantized/model"
model.save_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
```
434_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
435_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
435_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) is the easiest option for quantizing a model to 8 and 4-bit. 8-bit quantization multiplies outliers in fp16 with non-outliers in int8, converts the non-outlier values back to fp16, and then adds them together to return the weights in fp16. This reduces the degradative effect outlier values have on a model's performance. 4-bit quantization compresses a model even further, and it is commonly used with [QLoRA](https://hf.co/papers/2305.14314) to
435_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
4-bit quantization compresses a model even further, and it is commonly used with [QLoRA](https://hf.co/papers/2305.14314) to finetune quantized LLMs.
435_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
To use bitsandbytes, make sure you have the following libraries installed:

<hfoptions id="bnb">
<hfoption id="8-bit">

```bash
pip install transformers accelerate bitsandbytes>0.37.0
```

</hfoption>
<hfoption id="4-bit">

```bash
pip install bitsandbytes>=0.39.0
pip install --upgrade accelerate transformers
```

</hfoption>
</hfoptions>

<Tip>
435_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
```bash
pip install bitsandbytes>=0.39.0
pip install --upgrade accelerate transformers
```

</hfoption>
</hfoptions>

<Tip>

bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend).
435_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
We value your feedback to help identify bugs before the full release! Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.

</Tip>

Now you can quantize a model by passing a `BitsAndBytesConfig` to the [`~PreTrainedModel.from_pretrained`] method. This works for any model in any modality, as long as it supports loading with Accelerate and contains `torch.nn.Linear` layers.

<hfoptions id="bnb">
<hfoption id="8-bit">
435_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
<hfoptions id="bnb"> <hfoption id="8-bit"> Quantizing a model in 8-bit halves the memory-usage, and for large models, set `device_map="auto"` to efficiently use the GPUs available: ```py from transformers import AutoModelForCausalLM, BitsAndBytesConfig
435_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
435_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
model_8bit = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-1b7", quantization_config=quantization_config ) ``` By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter if you want. Setting `torch_dtype="auto"` loads the model in the data type defined in a model's `config.json` file. ```py import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig
435_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
435_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
model_8bit = AutoModelForCausalLM.from_pretrained( "facebook/opt-350m", quantization_config=quantization_config, torch_dtype="auto" ) model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype ```
435_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
torch_dtype="auto" ) model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype ``` Once a model is quantized to 8-bit, you can't push the quantized weights to the Hub unless you're using the latest version of Transformers and bitsandbytes. If you have the latest versions, then you can push the 8-bit model to the Hub with the [`~PreTrainedModel.push_to_hub`] method. The quantization config.json file is pushed first, followed by the quantized model weights. ```py
435_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
435_1_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",
    quantization_config=quantization_config
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
435_1_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
model.push_to_hub("bloom-560m-8bit") ``` </hfoption> <hfoption id="4-bit"> Quantizing a model in 4-bit reduces your memory-usage by 4x, and for large models, set `device_map="auto"` to efficiently use the GPUs available: ```py from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True)
435_1_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
model_4bit = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-1b7", quantization_config=quantization_config ) ``` By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter if you want. Setting `torch_dtype="auto"` loads the model in the data type defined in a model's `config.json` file. ```py import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig
435_1_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
435_1_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
model_4bit = AutoModelForCausalLM.from_pretrained( "facebook/opt-350m", quantization_config=quantization_config, torch_dtype="auto" ) model_4bit.model.decoder.layers[-1].final_layer_norm.weight.dtype ``` If you have `bitsandbytes>=0.41.3`, you can serialize 4-bit models and push them on Hugging Face Hub. Simply call `model.push_to_hub()` after loading it in 4-bit precision. You can also save the serialized 4-bit models locally with `model.save_pretrained()` command. </hfoption> </hfoptions>
435_1_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
</hfoption>
</hfoptions>

<Tip warning={true}>

Training with 8-bit and 4-bit weights is only supported for training *extra* parameters.

</Tip>

You can check your memory footprint with the `get_memory_footprint` method:

```py
print(model.get_memory_footprint())
```

Quantized models can be loaded with the [`~PreTrainedModel.from_pretrained`] method without needing to specify the `load_in_8bit` or `load_in_4bit` parameters:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
435_1_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
.md
model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto") ```
435_1_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#8-bit-llmint8-algorithm
.md
<Tip>

Learn more about the details of 8-bit quantization in this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration)!

</Tip>

This section explores some of the specific features of 8-bit models, such as offloading, outlier thresholds, skipping module conversion, and finetuning.
435_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#offloading
.md
8-bit models can offload weights between the CPU and GPU to fit very large models into memory. The weights dispatched to the CPU are actually stored in **float32** and aren't converted to 8-bit. For example, to enable offloading for the [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) model, start by creating a [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
435_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#offloading
.md
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
```

Design a custom device map to fit everything on your GPU except for the `lm_head`, which you'll dispatch to the CPU:

```py
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}
```

Now load your model with the custom `device_map` and `quantization_config`:

```py
model_8bit = AutoModelForCausalLM.from_pretrained(
435_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#offloading
.md
```py
model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    torch_dtype="auto",
    device_map=device_map,
    quantization_config=quantization_config,
)
```
435_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#outlier-threshold
.md
An "outlier" is a hidden state value greater than a certain threshold, and these values are computed in fp16. While the values are usually normally distributed ([-3.5, 3.5]), this distribution can be very different for large models ([-60, 6] or [6, 60]). 8-bit quantization works well for values ~5, but beyond that, there is a significant performance penalty. A good default threshold value is 6, but a lower threshold may be needed for more unstable models (small models or finetuning).
435_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#outlier-threshold
.md
To find the best threshold for your model, we recommend experimenting with the `llm_int8_threshold` parameter in [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
435_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#outlier-threshold
.md
model_id = "bigscience/bloom-1b7" quantization_config = BitsAndBytesConfig( llm_int8_threshold=10, ) model_8bit = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype="auto", device_map=device_map, quantization_config=quantization_config, ) ```
435_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#skip-module-conversion
.md
For some models, like [Jukebox](model_doc/jukebox), you don't need to quantize every module to 8-bit, which can actually cause instability. With Jukebox, there are several `lm_head` modules that should be skipped using the `llm_int8_skip_modules` parameter in [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"

quantization_config = BitsAndBytesConfig(
    llm_int8_skip_modules=["lm_head"],
)
435_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#skip-module-conversion
.md
model_id = "bigscience/bloom-1b7" quantization_config = BitsAndBytesConfig( llm_int8_skip_modules=["lm_head"], ) model_8bit = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype="auto", device_map="auto", quantization_config=quantization_config, ) ```
435_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#finetuning
.md
With the [PEFT](https://github.com/huggingface/peft) library, you can finetune large models like [flan-t5-large](https://huggingface.co/google/flan-t5-large) and [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) with 8-bit quantization. You don't need to pass the `device_map` parameter for training because it'll automatically load your model on a GPU. However, you can still customize the device map with the `device_map` parameter if you want to (`device_map="auto"` should only be used for
435_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#finetuning
.md
can still customize the device map with the `device_map` parameter if you want to (`device_map="auto"` should only be used for inference).
435_6_1
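For illustration, here is a minimal sketch of attaching LoRA adapters to an 8-bit model with PEFT. The base checkpoint, target modules, and LoRA hyperparameters are assumptions for the example, not values prescribed by the docs.

```py
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model in 8-bit; no device_map is needed for training
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA settings; tune the rank, alpha, and target modules for your model
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```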
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#4-bit-qlora-algorithm
.md
<Tip>

Try 4-bit quantization in this [notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) and learn more about its details in this [blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes).

</Tip>

This section explores some of the specific features of 4-bit models, such as changing the compute data type, using the Normal Float 4 (NF4) data type, and using nested quantization.
435_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#compute-data-type
.md
To speed up computation, you can change the data type from float32 (the default value) to bf16 using the `bnb_4bit_compute_dtype` parameter in [`BitsAndBytesConfig`]:

```py
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```
435_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#normal-float-4-nf4
.md
NF4 is a 4-bit data type from the [QLoRA](https://hf.co/papers/2305.14314) paper, adapted for weights initialized from a normal distribution. You should use NF4 for training 4-bit base models. This can be configured with the `bnb_4bit_quant_type` parameter in the [`BitsAndBytesConfig`]:

```py
from transformers import BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)
435_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#normal-float-4-nf4
.md
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", quantization_config=nf4_config)
```

For inference, the `bnb_4bit_quant_type` does not have a huge impact on performance. However, to remain consistent with the model weights, you should use the same `bnb_4bit_compute_dtype` and `torch_dtype` values.
435_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#nested-quantization
.md
Nested quantization is a technique that can save additional memory at no additional performance cost. This feature performs a second quantization of the already quantized weights to save an additional 0.4 bits/parameter. For example, with nested quantization, you can finetune a [Llama-13b](https://huggingface.co/meta-llama/Llama-2-13b) model on a 16GB NVIDIA T4 GPU with a sequence length of 1024, a batch size of 1, and gradient accumulation enabled with 4 steps.

```py
435_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#nested-quantization
.md
```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
435_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#nested-quantization
.md
double_quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

model_double_quant = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b", torch_dtype="auto", quantization_config=double_quant_config)
```
435_10_2
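To connect the configuration above to the finetuning scenario described earlier (sequence length 1024, batch size 1, 4 gradient accumulation steps on a 16GB T4), here is a hedged sketch of matching training arguments; the output directory and remaining hyperparameters are placeholders.

```py
from transformers import TrainingArguments

# Illustrative settings for a 16GB T4: batch size 1 with 4-step gradient accumulation
training_args = TrainingArguments(
    output_dir="llama-13b-nested-quant-finetune",  # placeholder path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,  # further reduces activation memory
    fp16=True,
    num_train_epochs=1,
    logging_steps=10,
)
# Pass training_args, the quantized model, and a dataset tokenized to max length 1024 to Trainer.
```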
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#dequantizing-bitsandbytes-models
.md
Once quantized, you can dequantize the model to the original precision, but this might result in a small loss of quality. Make sure you have enough GPU RAM to fit the dequantized model.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer

model_id = "facebook/opt-125m"

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=BitsAndBytesConfig(load_in_4bit=True))
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.dequantize()
435_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#dequantizing-bitsandbytes-models
.md
model.dequantize()

text = tokenizer("Hello my name is", return_tensors="pt").to(0)
out = model.generate(**text)
print(tokenizer.decode(out[0]))
```
435_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
436_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
436_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
[TorchAO](https://github.com/pytorch/ao) is an architecture optimization library for PyTorch. It provides high-performance dtypes, optimization techniques, and kernels for inference and training, and composes with native PyTorch features like `torch.compile` and FSDP. Some benchmark numbers can be found [here](https://github.com/pytorch/ao/tree/main/torchao/quantization#benchmarks).

Before you begin, make sure the following libraries are installed with their latest version:

```bash
436_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
Before you begin, make sure the following libraries are installed with their latest version:

```bash
# Updating 🤗 Transformers to the latest version, as the example script below uses the new auto compilation
pip install --upgrade torch torchao transformers
```
436_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
pip install --upgrade torch torchao transformers
```

By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in, such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file and automatically use the most memory-optimal data type.

```py
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
436_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
model_name = "meta-llama/Meta-Llama-3-8B" # We support int4_weight_only, int8_weight_only and int8_dynamic_activation_int8_weight # More examples and documentations for arguments can be found in https://github.com/pytorch/ao/tree/main/torchao/quantization#other-available-quantization-techniques quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
436_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
quantization_config = TorchAoConfig("int4_weight_only", group_size=128) quantized_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto", quantization_config=quantization_config)
436_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

# auto-compile the quantized model with `cache_implementation="static"` to get a speedup
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

# benchmark the performance
import torch.utils.benchmark as benchmark
436_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
# benchmark the performance
import torch.utils.benchmark as benchmark

def benchmark_fn(f, *args, **kwargs):
    # Manual warmup
    for _ in range(5):
        f(*args, **kwargs)

    t0 = benchmark.Timer(
        stmt="f(*args, **kwargs)",
        globals={"args": args, "kwargs": kwargs, "f": f},
        num_threads=torch.get_num_threads(),
    )
    return f"{(t0.blocked_autorange().mean):.3f}"
436_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
MAX_NEW_TOKENS = 1000
print("int4wo-128 model:", benchmark_fn(quantized_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS, cache_implementation="static"))
436_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
.md
bf16_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cuda", torch_dtype=torch.bfloat16)
output = bf16_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")  # auto-compile
print("bf16 model:", benchmark_fn(bf16_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS, cache_implementation="static"))
```
436_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#serialization-and-deserialization
.md
torchao quantization is implemented with [tensor subclasses](https://pytorch.org/docs/stable/notes/extending.html#subclassing-torch-tensor), so it only works with Hugging Face non-safetensors serialization and deserialization. It relies on `torch.load(..., weights_only=True)` to avoid arbitrary user code execution during load time and uses [add_safe_globals](https://pytorch.org/docs/stable/notes/serialization.html#torch.serialization.add_safe_globals) to allowlist some known user functions.
436_2_0
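The `weights_only` mechanism referred to here is PyTorch's, not Transformers-specific. The sketch below illustrates the general idea with a plain state dict; the commented-out allowlisting call shows where a custom tensor subclass would need to be registered (the class name is hypothetical).

```py
import torch

# Plain tensors deserialize fine under weights_only=True
state = {"weight": torch.randn(2, 2)}
torch.save(state, "state.pt")
loaded = torch.load("state.pt", weights_only=True)

# Custom classes, such as torchao's wrapper tensor subclasses, must be allowlisted first:
# torch.serialization.add_safe_globals([MyQuantizedTensorSubclass])  # hypothetical class
```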
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#serialization-and-deserialization
.md
The reason safetensors serialization is not supported is that wrapper tensor subclasses allow maximum flexibility, so we want to keep the effort of supporting new quantized tensor formats low. Safetensors, on the other hand, optimizes for maximum safety (no user code execution), which means every new quantization format has to be supported manually.

```py
# save quantized model locally
output_dir = "llama3-8b-int4wo-128"
quantized_model.save_pretrained(output_dir, safe_serialization=False)
436_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#serialization-and-deserialization
.md
# push to huggingface hub
# save_to = "{user_id}/llama3-8b-int4wo-128"
# quantized_model.push_to_hub(save_to, safe_serialization=False)

# load quantized model
ckpt_id = "llama3-8b-int4wo-128"  # or huggingface hub model id
loaded_quantized_model = AutoModelForCausalLM.from_pretrained(ckpt_id, device_map="cuda")
436_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#serialization-and-deserialization
.md
# confirm the speedup
loaded_quantized_model = torch.compile(loaded_quantized_model, mode="max-autotune")
print("loaded int4wo-128 model:", benchmark_fn(loaded_quantized_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS))
```
436_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
437_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
437_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
<Tip>

Try GPTQ quantization with PEFT in this [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) and learn more about its details in this [blog post](https://huggingface.co/blog/gptq-integration)!

</Tip>
437_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
Both [GPTQModel](https://github.com/ModelCloud/GPTQModel) and [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) libraries implement the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently to find a version of the weights that minimizes error. These weights are quantized to int4, stored as int32 (int4 x 8) and dequantized (restored) to fp16 on the fly during inference. This can save memory by almost 4x because the int4 weights are often
437_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
(restored) to fp16 on the fly during inference. This can save memory by almost 4x because the int4 weights are often dequantized in a fused kernel. You can also expect a substantial speedup in inference due to lower bandwidth requirements for lower bitwidth.
437_1_2
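To make the "almost 4x" figure concrete, here is a rough back-of-the-envelope calculation; the 7B parameter count is illustrative, and the group-wise scales and zero-points that GPTQ also stores are ignored.

```py
params = 7_000_000_000  # illustrative 7B-parameter model

fp16_bytes = params * 2        # 2 bytes per weight in fp16
int4_bytes = params * 4 // 8   # 4 bits per weight, packed 8 to an int32

print(f"fp16 weights: {fp16_bytes / 1e9:.1f} GB")  # ~14.0 GB
print(f"int4 weights: {int4_bytes / 1e9:.1f} GB")  # ~3.5 GB, roughly 4x smaller
```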
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
[GPTQModel](https://github.com/ModelCloud/GPTQModel) started as a maintained fork of AutoGPTQ but has since differentiated itself with the following major differences.

* Model support: GPTQModel continues to support all of the latest LLM models.
* Multimodal support: GPTQModel supports accurate quantization of Qwen 2-VL and Ovis 1.6-VL image-to-text models.
* Platform support: Linux, macOS (Apple Silicon), and Windows 11.
437_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
* Platform support: Linux, macOS (Apple Silicon), and Windows 11.
* Hardware support: NVIDIA CUDA, AMD ROCm, Apple Silicon M1/MPS/CPU, Intel/AMD CPU, and Intel Datacenter Max/Arc GPUs.
* Asymmetric support: Asymmetric quantization can potentially introduce lower quantization errors compared to symmetric quantization. However, it is not backward compatible with AutoGPTQ, and not all kernels, such as Marlin, support asymmetric quantization.
437_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
* IPEX kernel for Intel/AMD accelerated CPU and Intel GPU (Datacenter Max/Arc GPUs) support.
* Updated Marlin kernel from Neural Magic optimized for A100 (Ampere).
* Updated kernels with auto-padding for legacy model support and models with non-uniform in/out-features.
* Faster quantization, lower memory usage, and more accurate default quantization via GPTQModel quantization APIs.
* User and developer friendly APIs.
437_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
* User and developer friendly APIs.

[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) will likely be deprecated in the future due to the lack of continued support for new models and features.

Before you begin, make sure the following libraries are installed and updated to the latest release:

```bash
pip install --upgrade accelerate optimum transformers
```

Then install either GPTQModel or AutoGPTQ.

```bash
pip install gptqmodel --no-build-isolation
```

or

```bash
437_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
```

Then install either GPTQModel or AutoGPTQ.

```bash
pip install gptqmodel --no-build-isolation
```

or

```bash
pip install auto-gptq --no-build-isolation
```

To quantize a model (currently only supported for text models), you need to create a [`GPTQConfig`] class and set the number of bits to quantize to, a dataset to calibrate the weights for quantization, and a tokenizer to prepare the dataset.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
437_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
model_id = "facebook/opt-125m" tokenizer = AutoTokenizer.from_pretrained(model_id) gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer) ``` You could also pass your own dataset as a list of strings, but it is highly recommended to use the same dataset from the GPTQ paper. ```py dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."] gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer) ```
437_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```

Load a model to quantize and pass the `gptq_config` to the [`~AutoModelForCausalLM.from_pretrained`] method. Set `device_map="auto"` to automatically offload the model to a CPU to help fit the model in memory, and allow the model modules to be moved between the CPU and GPU for quantization.

```py
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```
437_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config) ``` If you're running out of memory because a dataset is too large, disk offloading is not supported. If this is the case, try passing the `max_memory` parameter to allocate the amount of memory to use on your device (GPU and CPU): ```py
437_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
```py quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", max_memory={0: "30GiB", 1: "46GiB", "cpu": "30GiB"}, quantization_config=gptq_config) ``` <Tip warning={true}>
437_1_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
```

<Tip warning={true}>

Depending on your hardware, it can take some time to quantize a model from scratch. It can take ~5 minutes to quantize the [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model on a free-tier Google Colab GPU, but it'll take ~4 hours to quantize a 175B parameter model on an NVIDIA A100. Before you quantize a model, it is a good idea to check the Hub to see if a GPTQ-quantized version of the model already exists.

</Tip>
437_1_12
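One way to check programmatically is to search the Hub with `huggingface_hub`; the query string below is illustrative.

```py
from huggingface_hub import HfApi

api = HfApi()
# Search the Hub for GPTQ-quantized variants of a base model (illustrative query)
for model_info in api.list_models(search="opt-125m gptq", limit=10):
    print(model_info.id)
```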
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
</Tip>

Once your model is quantized, you can push the model and tokenizer to the Hub where they can be easily shared and accessed. Use the [`~PreTrainedModel.push_to_hub`] method to save the [`GPTQConfig`]:

```py
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```
437_1_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
```py
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```

You could also save your quantized model locally with the [`~PreTrainedModel.save_pretrained`] method. If the model was quantized with the `device_map` parameter, make sure to move the entire model to a GPU or CPU before saving it. For example, to save the model on a CPU:

```py
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
437_1_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
.md
# if quantized with device_map set
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```

Reload a quantized model with the [`~PreTrainedModel.from_pretrained`] method, and set `device_map="auto"` to automatically distribute the model on all available GPUs to load the model faster without using more memory than needed.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```
437_1_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#marlin
.md
[Marlin](https://github.com/IST-DASLab/marlin) is a 4-bit only CUDA GPTQ kernel, highly optimized for the NVIDIA A100 GPU (Ampere) architecture. Loading, dequantization, and execution of post-dequantized weights are highly parallelized, offering a substantial inference improvement versus the original CUDA GPTQ kernel. Marlin is only available for quantized inference and does not support model quantization.

Marlin inference can be activated with the `backend` parameter in [`GPTQConfig`].

```py
437_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#marlin
.md
from transformers import AutoModelForCausalLM, GPTQConfig

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=GPTQConfig(bits=4, backend="marlin"))
```
437_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#exllama
.md
[ExLlama](https://github.com/turboderp/exllama) is a CUDA implementation of the [Llama](model_doc/llama) model that is designed for faster inference with 4-bit GPTQ weights (check out these [benchmarks](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark)). The ExLlama kernel is activated by default when you create a [`GPTQConfig`] object. To boost inference speed even further, use the [ExLlamaV2](https://github.com/turboderp/exllamav2) kernels by configuring the `exllama_config`
437_3_0
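The excerpt cuts off before the `exllama_config` example. As a sketch, assuming `exllama_config` accepts a version mapping like `{"version": 2}` as in the Transformers GPTQ docs, enabling the ExLlamaV2 kernels could look like this:

```py
from transformers import AutoModelForCausalLM, GPTQConfig

# Assumption: exllama_config={"version": 2} selects the ExLlamaV2 kernels
gptq_config = GPTQConfig(bits=4, exllama_config={"version": 2})
model = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq",
    device_map="auto",
    quantization_config=gptq_config,
)
```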