add fixes

blog/openvino_vlm/openvino-vlm.md (changed)
First, you will need to convert your model to the OpenVINO IR. There are multiple ways to do it:

1. You can use the [Optimum CLI](https://huggingface.co/docs/optimum-intel/en/openvino/export#using-the-cli):

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct smolvlm_ov/
```

2. Or you can convert it [on the fly](https://huggingface.co/docs/optimum-intel/en/openvino/export#when-loading-your-model) when loading your model:

```python
from optimum.intel import OVModelForVisualCausalLM

model_id = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"
model = OVModelForVisualCausalLM.from_pretrained(model_id)
model.save_pretrained("smolvlm_ov")
```
Weight-only quantization means that only the weights are being quantized, while the activations are left untouched.

However, the “interactions” during the trip, like drinking water, remain unchanged. This is similar to what happens to activations, which stay in high precision (FP32 or BF16) to preserve accuracy during computation.

As a result, the model becomes smaller and more memory-efficient, improving loading times. But since activations are not quantized, inference speed gains are limited. Since OpenVINO 2024.3, if the model's weights have been quantized, the corresponding activations will also be quantized at runtime, leading to an additional speedup depending on the device.

Weight-only quantization is a simple first step since it usually doesn’t result in significant accuracy degradation. In order to run it, you will need to create a quantization configuration using Optimum's `OVWeightQuantizationConfig` as follows:
or equivalently using the CLI:

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct --weight-format int8 smolvlm_int8/
```
or equivalently using the CLI:

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct --weight-format int8 --dataset contextual --num-samples 50 smolvlm_static_int8/
```

Quantizing activations adds small errors that can build up and affect accuracy, so careful testing afterward is important. More information and examples can be found in [our documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#pipeline-quantization).