Commit d63e57b
Parent(s): 9275f61

Enable torch autocast for albert xxlarge. (#1)

- Enable torch autocast for albert large. (adc2b679b8ae948c8d504d51d7e2a40f5bb6ea9d)
- Remove mentions of habana mixed precision (c940e1a7c4a993f3fd1208c6c84946b8552380ae)

Co-authored-by: Shiv Kaul <skaulintel@users.noreply.huggingface.co>

Files changed:
- README.md (+2 -6)
- gaudi_config.json (+2 -25)
README.md CHANGED

@@ -13,11 +13,7 @@ This model only contains the `GaudiConfig` file for running the [albert-xxlarge-
 **This model contains no model weights, only a GaudiConfig.**

 This enables to specify:
-- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
-- `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation
-- `hmp_bf16_ops`: list of operators that should run in bf16
-- `hmp_fp32_ops`: list of operators that should run in fp32
-- `hmp_is_verbose`: verbosity
+- `use_torch_autocast`: whether to use PyTorch's autocast mixed precision
 - `use_fused_adam`: whether to use Habana's custom AdamW implementation
 - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator

@@ -45,4 +41,4 @@ python run_qa.py \
   --throughput_warmup_steps 2
 ```

-Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
+Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
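For context, here is a minimal sketch of how a `GaudiConfig` like this one is consumed through `optimum-habana`. The repo id `Habana/albert-xxlarge-v1`, the model id `albert-xxlarge-v1`, and the exact trainer arguments are assumptions based on the optimum-habana documentation, not something this commit specifies.

```python
# Hypothetical usage sketch; repo/model ids and trainer arguments are assumed,
# not taken from this commit.
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# The GaudiConfig repo holds only HPU runtime settings (no model weights):
# use_torch_autocast, use_fused_adam, use_fused_clip_norm.
gaudi_config = GaudiConfig.from_pretrained("Habana/albert-xxlarge-v1")  # assumed repo id

model = AutoModelForQuestionAnswering.from_pretrained("albert-xxlarge-v1")
tokenizer = AutoTokenizer.from_pretrained("albert-xxlarge-v1")

args = GaudiTrainingArguments(
    output_dir="./albert-qa",
    use_habana=True,     # run on HPU
    use_lazy_mode=True,  # Habana lazy-mode graph execution
)

trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,  # autocast and fused ops are picked up from here
    args=args,
    tokenizer=tokenizer,
    # train_dataset=..., eval_dataset=... as with a regular transformers Trainer
)
```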
gaudi_config.json CHANGED

@@ -1,28 +1,5 @@
 {
-  "use_habana_mixed_precision": true,
-  "hmp_is_verbose": false,
+  "use_torch_autocast": true,
   "use_fused_adam": true,
-  "use_fused_clip_norm": true,
-  "hmp_bf16_ops": [
-    "add",
-    "addmm",
-    "bmm",
-    "div",
-    "dropout",
-    "gelu",
-    "iadd",
-    "linear",
-    "layer_norm",
-    "matmul",
-    "mm",
-    "rsub",
-    "softmax",
-    "truediv"
-  ],
-  "hmp_fp32_ops": [
-    "embedding",
-    "nll_loss",
-    "log_softmax",
-    "cross_entropy"
-  ]
+  "use_fused_clip_norm": true
 }
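Applied to the file, the diff above should leave `gaudi_config.json` as the following three-option file (reconstructed from the diff; two-space indentation is an assumption):

```json
{
  "use_torch_autocast": true,
  "use_fused_adam": true,
  "use_fused_clip_norm": true
}
```

The practical effect of swapping `use_habana_mixed_precision` for `use_torch_autocast` is that mixed precision is driven by PyTorch's native `torch.autocast` context (typically bf16 on HPU) rather than Habana's HMP, which needed the per-operator `hmp_bf16_ops`/`hmp_fp32_ops` lists that this commit deletes.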