source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
First, let us look at the [`quantization_config` of the model](https://huggingface.co/nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf/blob/main/config.json). At a glance, the number of entries looks overwhelming, but this is because compressed-tensors is a format that allows for flexible expression both during and after model compression.
444_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
In practice, for checkpoint loading and inference, the configuration can be simplified to omit the default or empty entries, so we will do that here to focus on the compression that is actually represented.

```yaml
"quantization_config": {
  "config_groups": {
    "group_0": {
      "input_activations": { "num_bits": 8, "strategy": "tensor", "type": "float" },
      "targets": ["Linear"],
      "weights": { "num_bits": 8, "strategy": "tensor", "type": "float" }
    }
  },
  "format": "naive-quantized",
  "ignore": ["lm_head"],
444_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
      "weights": { "num_bits": 8, "strategy": "tensor", "type": "float" }
    }
  },
  "format": "naive-quantized",
  "ignore": ["lm_head"],
  "quant_method": "compressed-tensors",
  "quantization_status": "frozen"
},
```
444_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
  "ignore": ["lm_head"],
  "quant_method": "compressed-tensors",
  "quantization_status": "frozen"
},
```

We can see from the above configuration that it specifies one config group that applies weight and activation quantization to FP8 with a static per-tensor strategy. It is also worth noting that the `ignore` list contains an entry to skip quantization of the `lm_head` module, so that module should be left untouched in the checkpoint.
444_6_4
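To double-check these fields programmatically, here is a minimal sketch (it assumes only that `transformers` is installed and that the checkpoint's `quantization_config` is surfaced as a plain dict on the loaded config):

```python
from transformers import AutoConfig

# Load only the configuration; no weights are downloaded.
config = AutoConfig.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf")

quant = config.quantization_config        # dict mirroring the "quantization_config" entry in config.json
print(quant["format"])                    # "naive-quantized"
print(quant["ignore"])                    # ["lm_head"]
print(quant["config_groups"]["group_0"])  # per-group weight/activation schemes
```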
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
To see the result of the configuration in practice, we can simply use the [safetensors viewer](https://huggingface.co/nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf?show_file_info=model.safetensors.index.json) on the model card to see the quantized weights, input_scale, and weight_scale for all of the Linear modules in the first model layer (and so on for the rest of the layers).

| Tensors | Shape | Precision |
| ------- | ----- | --------- |
| model.layers.0.input_layernorm.weight | [4096] | BF16 |
444_6_5
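The same information can be inspected locally. The sketch below assumes the `huggingface_hub` and `safetensors` packages, and the shard filename is an assumption (check `model.safetensors.index.json` for the real names):

```python
from huggingface_hub import hf_hub_download
from safetensors import safe_open

# Download one shard and list name, shape, and dtype for the first decoder layer.
path = hf_hub_download(
    "nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf",
    "model-00001-of-00002.safetensors",  # assumed shard name
)

with safe_open(path, framework="pt") as f:
    for name in f.keys():
        if name.startswith("model.layers.0."):
            t = f.get_slice(name)
            print(name, t.get_shape(), t.get_dtype())
```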
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
| Tensors | Shape | Precision |
| ------- | ----- | --------- |
| model.layers.0.input_layernorm.weight | [4096] | BF16 |
| model.layers.0.mlp.down_proj.input_scale | [1] | BF16 |
| model.layers.0.mlp.down_proj.weight | [4096, 14336] | F8_E4M3 |
| model.layers.0.mlp.down_proj.weight_scale | [1] | BF16 |
| model.layers.0.mlp.gate_proj.input_scale | [1] | BF16 |
| model.layers.0.mlp.gate_proj.weight | [14336, 4096] | F8_E4M3 |
| model.layers.0.mlp.gate_proj.weight_scale | [1] | BF16 |
| model.layers.0.mlp.up_proj.input_scale | [1] | BF16 |
444_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
| model.layers.0.mlp.gate_proj.weight_scale | [1] | BF16 |
| model.layers.0.mlp.up_proj.input_scale | [1] | BF16 |
| model.layers.0.mlp.up_proj.weight | [14336, 4096] | F8_E4M3 |
| model.layers.0.mlp.up_proj.weight_scale | [1] | BF16 |
| model.layers.0.post_attention_layernorm.weight | [4096] | BF16 |
| model.layers.0.self_attn.k_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.k_proj.weight | [1024, 4096] | F8_E4M3 |
| model.layers.0.self_attn.k_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.o_proj.input_scale | [1] | BF16 |
444_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
| model.layers.0.self_attn.k_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.o_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.o_proj.weight | [4096, 4096] | F8_E4M3 |
| model.layers.0.self_attn.o_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.q_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.q_proj.weight | [4096, 4096] | F8_E4M3 |
| model.layers.0.self_attn.q_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.v_proj.input_scale | [1] | BF16 |
444_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
| model.layers.0.self_attn.q_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.v_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.v_proj.weight | [1024, 4096] | F8_E4M3 |
| model.layers.0.self_attn.v_proj.weight_scale | [1] | BF16 |
444_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
| model.layers.0.self_attn.v_proj.weight_scale | [1] | BF16 |

When we load the model with the compressed-tensors HFQuantizer integration, we can see that all of the Linear modules that are specified within the quantization configuration have been replaced by `CompressedLinear` modules that manage the compressed weights and forward pass for inference. Note that the `lm_head` mentioned before in the ignore list is still kept as an unquantized Linear module.
444_6_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
```python
from transformers import AutoModelForCausalLM
444_6_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
ct_model = AutoModelForCausalLM.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf")
print(ct_model)
"""
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(128256, 4096)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): CompressedLinear(
            in_features=4096, out_features=4096, bias=False
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (k_proj): CompressedLinear(
444_6_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (k_proj): CompressedLinear(
            in_features=4096, out_features=1024, bias=False
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (v_proj): CompressedLinear(
            in_features=4096, out_features=1024, bias=False
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (o_proj): CompressedLinear(
444_6_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (o_proj): CompressedLinear(
            in_features=4096, out_features=4096, bias=False
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): CompressedLinear(
            in_features=4096, out_features=14336, bias=False
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
444_6_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (up_proj): CompressedLinear(
            in_features=4096, out_features=14336, bias=False
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (down_proj): CompressedLinear(
            in_features=14336, out_features=4096, bias=False
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (act_fn): SiLU()
        )
444_6_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
            (input_observer): MovingAverageMinMaxObserver()
            (weight_observer): MovingAverageMinMaxObserver()
          )
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
        (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
      )
    )
    (norm): LlamaRMSNorm((4096,), eps=1e-05)
    (rotary_emb): LlamaRotaryEmbedding()
  )
  (lm_head): Linear(in_features=4096, out_features=128256, bias=False)
)
"""
```
444_6_16
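A quick sanity check (sketch, reusing `ct_model` from above) confirms that the ignored `lm_head` is still a plain `nn.Linear`, while the projections are `CompressedLinear` modules:

```python
import torch.nn as nn

print(type(ct_model.lm_head).__name__)                           # Linear
print(type(ct_model.model.layers[0].self_attn.q_proj).__name__)  # CompressedLinear
assert type(ct_model.lm_head) is nn.Linear
```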
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
445_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
445_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#bitnet
.md
[BitNet](https://arxiv.org/abs/2402.17764) replaces traditional Linear layers in Multi-Head Attention and Feed-Forward Networks with specialized layers called BitLinear with ternary (or binary in the older version) precision. The BitLinear layers introduced here quantize the weights using ternary precision (with values of -1, 0, and 1) and quantize the activations to 8-bit precision. <figure style="text-align: center;">
445_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#bitnet
.md
<figure style="text-align: center;"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/1.58llm_extreme_quantization/bitlinear.png" alt="Alt Text" /> <figcaption>The architecture of BitNet with BitLinear layers</figcaption> </figure>
445_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#bitnet
.md
<figcaption>The architecture of BitNet with BitLinear layers</figcaption> </figure>

During training, we start by quantizing the weights into ternary values, using symmetric per tensor quantization. First, we compute the average of the absolute values of the weight matrix and use this as a scale. We then divide the weights by the scale, round the values, constrain them between -1 and 1, and finally rescale them to continue in full precision.

$$
scale_w = \frac{1}{\frac{1}{nm} \sum_{ij} |W_{ij}|}
$$

$$
445_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#bitnet
.md
$$
scale_w = \frac{1}{\frac{1}{nm} \sum_{ij} |W_{ij}|}
$$

$$
W_q = \text{clamp}_{[-1,1]}(\text{round}(W \cdot scale_w))
$$

$$
W_{dequantized} = W_q \cdot scale_w
$$

Activations are then quantized to a specified bit-width (e.g., 8-bit) using [absmax](https://arxiv.org/pdf/2208.07339) quantization (symmetric per-channel quantization). This involves scaling the activations into the range [-128, 127). The quantization formula is:

$$
scale_x = \frac{127}{|X|_{\text{max}, \, \text{dim}=-1}}
$$

$$
445_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#bitnet
.md
$$
scale_x = \frac{127}{|X|_{\text{max}, \, \text{dim}=-1}}
$$

$$
X_q = \text{clamp}_{[-128,127]}(\text{round}(X \cdot scale_x))
$$

$$
X_{dequantized} = X_q \cdot scale_x
$$

To learn more about how we trained and fine-tuned BitNet models, check out the blog post [here](https://huggingface.co/blog/1_58_llm_extreme_quantization).
445_1_4
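To make the two steps concrete, here is a minimal PyTorch sketch of the fake-quantization described above. It follows the prose (weight scale = mean absolute value, activations scaled by 127 / absmax per row) and is only an illustration, not the actual BitNet training code:

```python
import torch

def quantize_weights_ternary(W: torch.Tensor):
    # Per-tensor scale: the mean absolute value of the weight matrix, as described above.
    scale = W.abs().mean().clamp(min=1e-5)
    W_q = (W / scale).round().clamp(-1, 1)  # ternary values in {-1, 0, 1}
    W_dq = W_q * scale                      # rescale to continue training in full precision
    return W_q, W_dq

def quantize_activations_absmax(X: torch.Tensor, bits: int = 8):
    q_max = 2 ** (bits - 1) - 1             # 127 for 8-bit
    scale = q_max / X.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
    X_q = (X * scale).round().clamp(-q_max - 1, q_max)
    X_dq = X_q / scale                      # undo the scaling
    return X_q, X_dq

W = torch.randn(16, 32)
X = torch.randn(4, 32)
W_q, _ = quantize_weights_ternary(W)
X_q, _ = quantize_activations_absmax(X)
print(W_q.unique())          # tensor([-1., 0., 1.])
print(X_q.min(), X_q.max())  # values within [-128, 127]
```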
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#load-a-bitnet-model-from-the-hub
.md
BitNet models can't be quantized on the fly; they need to be pre-trained or fine-tuned with the quantization applied (it's a quantization-aware training technique). Once trained, these models are already quantized and available as packed versions on the Hub.

A quantized model can be loaded as follows:

```py
from transformers import AutoModelForCausalLM

path = "/path/to/model"
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto")
```
445_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#pre-training--fine-tuning-a-bitnet-model
.md
If you're looking to pre-train or fine-tune your own 1.58-bit model using Nanotron, check out this [PR](https://github.com/huggingface/nanotron/pull/180); all you need to get started is there! For fine-tuning, you'll need to convert the model from Hugging Face format to Nanotron format (which has some differences). You can find the conversion steps in this [PR](https://github.com/huggingface/nanotron/pull/174).
445_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#kernels
.md
In our initial version, we chose to use `@torch.compile` to unpack the weights and perform the forward pass. It’s very straightforward to implement and delivers significant speed improvements. We plan to integrate additional optimized kernels in future versions.
445_4_0
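As a rough illustration of what such a kernel does, here is a toy sketch that unpacks 2-bit codes (four per byte) and runs the matmul under `torch.compile`. The packing layout and names here are assumptions made for illustration; the actual layout used by Transformers may differ:

```python
import torch

@torch.compile
def unpack_and_forward(x, packed_w, scale):
    # Toy packing assumption: four 2-bit codes per uint8, code c -> weight (c - 1) in {-1, 0, +1}.
    shifts = torch.arange(0, 8, 2, device=packed_w.device, dtype=torch.uint8)
    codes = (packed_w.unsqueeze(-1) >> shifts) & 0b11
    w = codes.reshape(packed_w.shape[0], -1).to(x.dtype) - 1.0
    return (x @ w.t()) * scale

x = torch.randn(2, 64)
packed = torch.randint(0, 256, (128, 16), dtype=torch.uint8)  # 128 output rows, 64 packed weights each (random toy data)
y = unpack_and_forward(x, packed, scale=0.05)
print(y.shape)  # torch.Size([2, 128])
```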
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
446_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
446_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#vptq
.md
> [!TIP]
> Try VPTQ on [Hugging Face](https://huggingface.co/spaces/microsoft/VPTQ)!
> Try VPTQ on [Google Colab](https://colab.research.google.com/github/microsoft/VPTQ/blob/main/notebooks/vptq_example.ipynb)!
> Know more about VPTQ on [ArXiv](https://arxiv.org/pdf/2409.17066)!
446_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#vptq
.md
> Know more about VPTQ on [ArXiv](https://arxiv.org/pdf/2409.17066)!

Vector Post-Training Quantization ([VPTQ](https://github.com/microsoft/VPTQ)) is a novel Post-Training Quantization method that leverages Vector Quantization to achieve high accuracy on LLMs at an extremely low bit-width (<2-bit). VPTQ can compress 70B and even 405B models to 1-2 bits without retraining while maintaining high accuracy.

- Better accuracy at 1-2 bits (405B @ <2-bit, 70B @ 2-bit)
446_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#vptq
.md
- Better accuracy at 1-2 bits (405B @ <2-bit, 70B @ 2-bit)
- Lightweight quantization algorithm: it takes only ~17 hours to quantize the 405B Llama-3.1 model
- Agile quantization inference: low decode overhead, best throughput, and TTFT

Inference support for VPTQ is released in the `vptq` library. Make sure to install it to run the models:

```bash
pip install vptq
```

The library provides efficient kernels for NVIDIA/AMD GPU inference. To run VPTQ models, simply load a model that has been quantized with VPTQ:
446_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#inference-example
.md
**Run Llama 3.1 70B on an RTX 4090 (24GB @ ~2 bits) in real time**

![Llama3 1-70b-prompt](https://github.com/user-attachments/assets/d8729aca-4e1d-4fe1-ac71-c14da4bdd97f)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
446_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#inference-example
.md
quantized_model = AutoModelForCausalLM.from_pretrained(
    "VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft")

input_ids = tokenizer("hello, it's me", return_tensors="pt").to("cuda")
out = quantized_model.generate(**input_ids, max_new_tokens=32, do_sample=False)
```
446_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#quantize-your-own-model
.md
The VPTQ algorithm is early-released on the [algorithm branch](https://github.com/microsoft/VPTQ/tree/algorithm); check out the [tutorial](https://github.com/microsoft/VPTQ/blob/algorithm/algorithm.md) to quantize your own model.
446_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#early-results-from-tech-report
.md
VPTQ achieves better accuracy and higher throughput with lower quantization overhead across models of different sizes. The following experimental results are for reference only; VPTQ can achieve better outcomes under reasonable parameters, especially in terms of model accuracy and inference speed.

| Model | bitwidth | W2↓ | C4↓ | AvgQA↑ | tok/s↑ | mem(GB) | cost/h↓ |
| ----------- | -------- | ---- | ---- | ------ | ------ | ------- | ------- |
446_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#early-results-from-tech-report
.md
| ----------- | -------- | ---- | ---- | ------ | ------ | ------- | ------- |
| LLaMA-2 7B | 2.02 | 6.13 | 8.07 | 58.2 | 39.9 | 2.28 | 2 |
| | 2.26 | 5.95 | 7.87 | 59.4 | 35.7 | 2.48 | 3.1 |
| LLaMA-2 13B | 2.02 | 5.32 | 7.15 | 62.4 | 26.9 | 4.03 | 3.2 |
| | 2.18 | 5.28 | 7.04 | 63.1 | 18.5 | 4.31 | 3.6 |
| LLaMA-2 70B | 2.07 | 3.93 | 5.72 | 68.6 | 9.7 | 19.54 | 19 |
446_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#early-results-from-tech-report
.md
| LLaMA-2 70B | 2.07 | 3.93 | 5.72 | 68.6 | 9.7 | 19.54 | 19 |
| | 2.11 | 3.92 | 5.71 | 68.7 | 9.7 | 20.01 | 19 |
446_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
⚠️ The repository only provides the model quantization algorithm. ⚠️ The open-source VPTQ-community provides models based on the technical report and the quantization algorithm.

**Quick Estimation of Model Bitwidth (Excluding Codebook Overhead)**:
446_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
**Quick Estimation of Model Bitwidth (Excluding Codebook Overhead)**:

- **Model Naming Convention**: The model's name encodes the **vector length** $v$, **codebook (lookup table) size**, and **residual codebook size**. For example, "Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft" is "Meta-Llama-3.1-70B-Instruct" quantized with:
  - **Vector Length**: 8
  - **Number of Centroids**: 65536 (2^16)
  - **Number of Residual Centroids**: 256 (2^8)
- **Equivalent Bitwidth Calculation**:
446_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
  - **Number of Centroids**: 65536 (2^16)
  - **Number of Residual Centroids**: 256 (2^8)
- **Equivalent Bitwidth Calculation**:
  - **Index**: log2(65536) = 16 bits per index, divided by the vector length of 8 = 2 bits per weight
  - **Residual Index**: log2(256) = 8 bits per index, divided by the vector length of 8 = 1 bit per weight
  - **Total Bitwidth**: 2 + 1 = 3 bits per weight
446_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
  - **Total Bitwidth**: 2 + 1 = 3 bits per weight
- **Model Size Estimation**: 70B parameters * 3 bits / 8 bits per byte = 26.25 GB
- **Note**: This estimate does not include the size of the codebook (lookup table), other parameter overheads, or the padding overhead for storing indices. For the detailed calculation method, please refer to **Tech Report Appendix C.2**. A quick sketch of this arithmetic follows below.

| Model Series | Collections | (Estimated) Bit per weight |
446_5_3
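The back-of-the-envelope estimate above can be written directly in a few lines; this is just a sketch with a made-up helper name:

```python
import math

def vptq_bits_per_weight(vector_len: int, num_centroids: int, num_res_centroids: int = 0) -> float:
    # Bits per weight = index bits / vector length, plus the residual index if present.
    bits = math.log2(num_centroids) / vector_len
    if num_res_centroids:
        bits += math.log2(num_res_centroids) / vector_len
    return bits

# "Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft": v=8, k=65536, residual k=256
bits = vptq_bits_per_weight(8, 65536, 256)  # 16/8 + 8/8 = 3.0 bits per weight
size_gb = 70e9 * bits / 8 / 1e9             # ~26.25 GB, excluding codebook overhead
print(bits, size_gb)
```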
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| :--------------------------------: | :-------------------------------: | :-------------------------------: |
446_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Llama 3.1 Nemotron 70B Instruct HF | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-nemotron-70b-instruct-hf-without-finetune-671730b96f16208d0b3fe942) | [4 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-65536-woft)
446_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2 bits (1)](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-0-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-16384-woft) [1.625 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-1024-woft) [1.5
446_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[1.625 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-1024-woft) [1.5 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-256-woft) |
446_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Llama 3.1 8B Instruct | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-8b-instruct-without-finetune-66f2b70b1d002ceedef02d2e) | [4 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-65536-woft) [3.5 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-4096-woft) [3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-256-woft) [2.3
446_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-256-woft) [2.3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v12-k65536-4096-woft)
446_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
|
446_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Llama 3.1 70B Instruct | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-70b-instruct-without-finetune-66f2bf454d3dd78dfee2ff11) | [4 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft) [2.25 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-4-woft) [2 bits
446_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2.25 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-4-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-0-woft) [1.93 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-32768-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k32768-0-woft) [1.75
446_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[1.875 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k32768-0-woft) [1.75 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k16384-0-woft) |
446_5_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Llama 3.1 405B Instruct | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-405b-instruct-without-finetune-66f4413f9ba55e1a9e52cfb0) | [4 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-256-woft) [2 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-65536-woft) [1.875
446_5_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-65536-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k32768-32768-woft) [1.625 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-1024-woft) [1.5 bits (1)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k4096-0-woft) [1.5 bits (2)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-256-woft) [1.43
446_5_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[1.5 bits (2)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-256-woft) [1.43 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-128-woft) [1.375 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-64-woft) |
446_5_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Mistral Large Instruct 2407 (123B) | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-mistral-large-instruct-2407-without-finetune-6711ebfb7faf85eed9cceb16) | [4 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-65536-woft) [2 bits
446_5_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2 bits (1)](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-0-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-16384-woft) [1.75 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-4096-woft) [1.625 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-1024-woft) [1.5
446_5_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[1.625 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-1024-woft) [1.5 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-256-woft) |
446_5_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Qwen 2.5 7B Instruct | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-7b-instruct-without-finetune-66f3e9866d3167cc05ce954a) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k256-256-woft) [2 bits
446_5_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k256-256-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-0-woft) [2 bits (3)](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v16-k65536-65536-woft)
446_5_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
|
446_5_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Qwen 2.5 14B Instruct | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-14b-instruct-without-finetune-66f827f83c7ffa7931b8376c) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k256-256-woft) [2 bits
446_5_26
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k256-256-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-0-woft) [2 bits (3)](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v16-k65536-65536-woft)
446_5_27
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
|
446_5_28
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Qwen 2.5 32B Instruct | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-32b-instruct-without-finetune-66fe77173bf7d64139f0f613) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v16-k65536-65536-woft) [2 bits
446_5_29
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-0-woft) [2 bits (3)](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k256-256-woft)
446_5_30
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
|
446_5_31
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Qwen 2.5 72B Instruct | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-72b-instruct-without-finetune-66f3bf1b3757dfa1ecb481c0) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-256-woft) [2.38 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k1024-512-woft) [2.25 bits
446_5_32
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2.38 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k1024-512-woft) [2.25 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k512-512-woft) [2.25 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-4-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-0-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-65536-woft) [1.94
446_5_33
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
[2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-65536-woft) [1.94 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-32768-woft) |
446_5_34
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Reproduced from the tech report | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/reproduced-vptq-tech-report-baseline-66fbf1dffe741cc9e93ecf04) | Results from the open source community for reference only, please use them responsibly.
446_5_35
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
|
446_5_36
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
| Hessian and Inverse Hessian Matrix | [HF πŸ€—](https://huggingface.co/collections/VPTQ-community/hessian-and-invhessian-checkpoints-66fd249a104850d17b23fd8b) | Collected from RedPajama-Data-1T-Sample, following [Quip#](https://github.com/Cornell-RelaxML/quip-sharp/blob/main/quantize_llama/hessian_offline_llama.py)
446_5_37
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
447_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
447_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#higgs
.md
HIGGS is a 0-shot quantization algorithm that combines Hadamard preprocessing with MSE-Optimal quantization grids to achieve lower quantization error and SOTA performance. You can find more information in the paper [arxiv.org/abs/2411.17525](https://arxiv.org/abs/2411.17525). Runtime support for HIGGS is implemented through [FLUTE](https://arxiv.org/abs/2407.10960) and its [library](https://github.com/HanGuo97/flute).
447_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#quantization-example
.md
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, HiggsConfig

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=HiggsConfig(bits=4),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

tokenizer.decode(model.generate(
    **tokenizer("Hi,", return_tensors="pt").to(model.device),
    temperature=0.5,
    top_p=0.80,
)[0])
```
447_2_0
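If you want to reuse the quantized model, it can in principle be saved and reloaded like any other Transformers checkpoint; the pre-quantized collection below suggests the format is serializable. Treat this as a sketch under that assumption (the directory name is arbitrary):

```python
save_dir = "gemma-2-9b-it-higgs-4bit"  # arbitrary local directory
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# Reload later without re-quantizing.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(save_dir, device_map="auto")
```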
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#pre-quantized-models
.md
Some pre-quantized models can be found in the [official collection](https://huggingface.co/collections/ISTA-DASLab/higgs-675308e432fd56b7f6dab94e) on Hugging Face Hub.
447_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#current-limitations
.md
**Architectures**

Currently, FLUTE, and HIGGS by extension, **only support Llama 3.1 and 3.0 with 8B, 70B and 405B parameters, as well as Gemma-2 9B and 27B**. We're working on supporting a more diverse set of models, as well as on allowing arbitrary models by modifying the FLUTE compilation procedure.

**torch.compile**
447_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#current-limitations
.md
**torch.compile**

HIGGS is fully compatible with `torch.compile`. Compiling `model.forward`, as described [here](../perf_torch_compile.md), gives the following speedups on an RTX 4090 for `Llama-3.1-8B-Instruct` (forward passes/sec):

| Batch Size | BF16 (With `torch.compile`) | HIGGS 4bit (No `torch.compile`) | HIGGS 4bit (With `torch.compile`) |
|------------|-----------------------------|----------------------------------|-----------------------------------|
447_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#current-limitations
.md
|------------|-----------------------------|----------------------------------|-----------------------------------|
| 1 | 59 | 41 | 124 |
| 4 | 57 | 42 | 123 |
| 16 | 56 | 41 | 120 |

**Quantized training**
447_4_2
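For reference, the compilation step itself is just the standard pattern from the guide linked above; here is a minimal sketch reusing the `model` and `tokenizer` from the quantization example:

```python
import torch

# Compile only the forward pass of the already-quantized model.
model.forward = torch.compile(model.forward)

# The first call triggers compilation; later calls reuse the compiled graph.
out = model.generate(
    **tokenizer("Hi,", return_tensors="pt").to(model.device),
    max_new_tokens=16,
)
print(tokenizer.decode(out[0]))
```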
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#current-limitations
.md
**Quantized training** Currently, HIGGS doesn't support quantized training (and backward passes in general). We're working on adding support for it.
447_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
448_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
448_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm
.md
> [!TIP]
> Try AQLM on [Google Colab](https://colab.research.google.com/drive/1-xZmBRXT5Fm3Ghn4Mwa2KRypORXb855X?usp=sharing)!

Additive Quantization of Language Models ([AQLM](https://arxiv.org/abs/2401.06118)) is a compression method for Large Language Models. It quantizes multiple weights together and takes advantage of interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes.
448_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm
.md
Inference support for AQLM is provided by the `aqlm` library. Make sure to install it to run the models (note that aqlm only works with python>=3.10):

```bash
pip install aqlm[gpu,cpu]
```

The library provides efficient kernels for both GPU and CPU inference and training.
448_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm
.md
```bash
pip install aqlm[gpu,cpu]
```

The library provides efficient kernels for both GPU and CPU inference and training.

Instructions on how to quantize models yourself, as well as all the relevant code, can be found in the corresponding GitHub [repository](https://github.com/Vahe1994/AQLM). To run AQLM models, simply load a model that has been quantized with AQLM:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
448_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm
.md
quantized_model = AutoModelForCausalLM.from_pretrained(
    "ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf")
```
448_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#peft
.md
Starting with version `aqlm 1.0.2`, AQLM supports Parameter-Efficient Fine-Tuning in the form of [LoRA](https://huggingface.co/docs/peft/package_reference/lora) integrated into the [PEFT](https://huggingface.co/blog/peft) library.
448_2_0
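A minimal sketch of attaching LoRA adapters with PEFT on top of the AQLM-quantized model loaded above (the rank, alpha, and target module names are illustrative choices, not fixed requirements):

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(quantized_model, lora_config)
peft_model.print_trainable_parameters()
```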
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm-configurations
.md
AQLM quantization setups vary mainly in the number of codebooks used, as well as the codebook sizes in bits. The most popular setups, and the inference kernels they support, are:

| Kernel | Number of codebooks | Codebook size, bits | Notation | Accuracy | Speedup | Fast GPU inference | Fast CPU inference |
|---|---------------------|---------------------|----------|-------------|-------------|--------------------|--------------------|
448_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm-configurations
.md
| Triton | K | N | KxN | - | Up to ~0.7x | βœ… | ❌ |
| CUDA | 1 | 16 | 1x16 | Best | Up to ~1.3x | βœ… | ❌ |
| CUDA | 2 | 8 | 2x8 | OK | Up to ~3.0x | βœ… | ❌ |
448_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm-configurations
.md
| Numba | K | 8 | Kx8 | Good | Up to ~4.0x | ❌ | βœ… |
448_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
449_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
449_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agents--tools
.md
<Tip warning={true}>

Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

</Tip>

To learn more about agents and tools, make sure to read the [introductory guide](../transformers_agents). This page contains the API docs for the underlying classes.
449_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agents
.md
We provide two types of agents, based on the main [`Agent`] class:

- [`CodeAgent`] acts in one shot, generating code to solve the task, then executes it at once.
- [`ReactAgent`] acts step by step, each step consisting of one thought, then one tool call and execution. It has two classes:
  - [`ReactJsonAgent`] writes its tool calls in JSON.
  - [`ReactCodeAgent`] writes its tool calls in Python code.
449_2_0
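As a rough usage sketch only: the agents API is experimental, and the class and engine names below (as well as the constructor arguments) are assumptions that have shifted between releases, so check the API reference on this page before relying on them.

```python
from transformers.agents import CodeAgent, HfApiEngine  # names assumed from the experimental agents API

# Engine and model id are placeholders; any supported LLM engine should work.
llm_engine = HfApiEngine(model="meta-llama/Meta-Llama-3.1-70B-Instruct")

agent = CodeAgent(tools=[], llm_engine=llm_engine, add_base_tools=True)
agent.run("How many seconds are there in a leap year?")
```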
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agent
.md
Agent
449_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#codeagent
.md
A class for an agent that solves the given task using a single block of code. It plans all its actions, then executes all in one shot.
449_4_0