apply-comments (#8)
- add comments (ded8f4c053cba9e54450cc008e1a30094e9f6fd0)
- add comments (e0252148fcf6cabc3815feec2a10c6410f0b6cf1)
- add fixes (645cc27a615cbedc051f8e7d0f6486e65bdb9beb)
- fix (b49481420ce2293a0797ab6cdb32948e2ed30704)
blog/openvino_vlm/openvino-vlm.md
CHANGED
@@ -55,7 +55,7 @@ First, you will need to convert your model to the OpenVINO IR. There are multipl
1. You can use the [Optimum CLI](https://huggingface.co/docs/optimum-intel/en/openvino/export#using-the-cli)

```bash
-optimum-cli export openvino -m HuggingFaceTB/SmolVLM-256M-Instruct smolvlm_ov/
+optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct smolvlm_ov/
```

2. Or you can convert it [on the fly](https://huggingface.co/docs/optimum-intel/en/openvino/export#when-loading-your-model) when loading your model:
@@ -63,7 +63,8 @@ optimum-cli export openvino -m HuggingFaceTB/SmolVLM-256M-Instruct smolvlm_ov/

```python
from optimum.intel import OVModelForVisualCausalLM

+model_id = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"
model = OVModelForVisualCausalLM.from_pretrained(model_id)
model.save_pretrained("smolvlm_ov")
```
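Both routes leave a converted copy of the model in `smolvlm_ov/`, so later runs can reload it directly instead of converting again. A minimal sketch of that reload (not part of the diff, just illustrating how the saved folder is reused):

```python
from optimum.intel import OVModelForVisualCausalLM

# Load the already-converted OpenVINO model from the local folder; no re-export happens here
model = OVModelForVisualCausalLM.from_pretrained("smolvlm_ov")
```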
@@ -91,7 +92,7 @@ Weight-only quantization means that only the weights are being quantized and lea

However, the “interactions” during the trip, like drinking water, remain unchanged. This is similar to what happens to activations, which stay in high precision (FP32 or BF16) to preserve accuracy during computation.

-As a result, the model becomes smaller and more memory-efficient, improving loading times. But since activations are not quantized, inference speed gains are limited
+As a result, the model becomes smaller and more memory-efficient, improving loading times. But since activations are not quantized, inference speed gains are limited. Since OpenVINO 2024.3, if the model's weights have been quantized, the corresponding activations will also be quantized at runtime, leading to an additional speedup depending on the device.

Weight-only quantization is a simple first step since it usually doesn’t result in significant accuracy degradation.
In order to run it, you will need to create a quantization configuration using Optimum `OVWeightQuantizationConfig` as follows:
@@ -99,23 +100,39 @@ In order to run it, you will need to create a quantization configuration using O

```python
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

q_config = OVWeightQuantizationConfig(bits=8)
# Apply quantization and save the new model
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_int8")
```

+or equivalently using the CLI:
+
+```bash
+optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct --weight-format int8 smolvlm_int8/
+```
+
## Option 2: Static Quantization

When applying static quantization, quantization is applied on both weights and activations. For this, a calibration step is needed in which a subset of a dataset is used to estimate the activation ranges. In the following example, we use 50 samples of the [contextual dataset](https://huggingface.co/datasets/ucla-contextual/contextual_test) to perform this calibration step.

```python
from optimum.intel import OVModelForVisualCausalLM, OVQuantizationConfig

q_config = OVQuantizationConfig(bits=8, dataset="contextual", num_samples=50)
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_static_int8")
```

+or equivalently using the CLI:
+
+```bash
+optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct --quant-mode int8 --dataset contextual --num-samples 50 smolvlm_static_int8/
+```
+
Quantizing activations adds small errors that can build up and affect accuracy, so careful testing afterward is important. More information and examples can be found in [our documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#pipeline-quantization).

### Step 3: Run inference
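The claim that weight-only quantization mainly shrinks the model is easy to verify by comparing the exported folders on disk. A quick sketch (not part of the diff; it assumes the `smolvlm_ov`, `smolvlm_int8`, and `smolvlm_static_int8` directories produced by the steps above exist):

```python
from pathlib import Path

def dir_size_mb(path: str) -> float:
    # Total size of all files under `path`, in megabytes
    return sum(f.stat().st_size for f in Path(path).rglob("*") if f.is_file()) / 1e6

for name in ("smolvlm_ov", "smolvlm_int8", "smolvlm_static_int8"):
    print(f"{name}: {dir_size_mb(name):.0f} MB")
```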
@@ -123,13 +140,18 @@ Quantizing activations adds small errors that can build up and affect accuracy,

You can now run inference with your quantized model:

```python
-# Generate outputs with quantized model
generated_ids = q_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```
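The snippet above relies on a `processor` and `inputs` that the post prepares earlier. A minimal sketch of that preparation (not part of the diff; it assumes the usual transformers chat-template API for SmolVLM-style models and uses a placeholder image URL):

```python
import requests
from PIL import Image
from transformers import AutoProcessor

model_id = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image; replace with your own
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Chat-style prompt interleaving the image and the question
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
```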
-Try the complete notebook [here](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb).

+If you have a recent Intel laptop, an Intel AI PC, or an Intel discrete GPU, you can load the model on the GPU by adding `device="gpu"` when loading your model:
+
+```python
+model = OVModelForVisualCausalLM.from_pretrained(model_id, device="gpu")
+```
+
+Try the complete notebook [here](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb).

## Conclusion