# test_envit5_finetune
This model is a fine-tuned version of [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 33.9330
- Bleu: 12.1272
- Gen Len: 18.518
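Since the framework versions below list PEFT, this repository presumably hosts a parameter-efficient adapter on top of the base model rather than full fine-tuned weights. A minimal loading and inference sketch under that assumption (the repo ids come from this card; the input prefix and generation settings are illustrative):

```python
# Minimal sketch, assuming this repo is a PEFT adapter for VietAI/envit5-translation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "VietAI/envit5-translation"
adapter_id = "ducmai-4203/test_envit5_finetune"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

# The base model typically expects a language prefix ("en: " or "vi: ") on the source text.
inputs = tokenizer("en: Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```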
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto the Trainer API):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
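A hedged sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` in Transformers 4.44.2; `output_dir` and the evaluation settings are illustrative placeholders, not values taken from this card:

```python
# Illustrative mapping of the listed hyperparameters onto the Trainer API.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="test_envit5_finetune",   # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=7,
    fp16=True,                           # "Native AMP" mixed-precision training
    eval_strategy="epoch",               # assumption: evaluation ran once per epoch (see results table)
    predict_with_generate=True,          # required so Bleu / Gen Len can be computed at eval time
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default optimizer settings.
```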
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 63   | 49.2942         | 12.4745 | 18.558  |
| No log        | 2.0   | 126  | 44.9845         | 12.6848 | 18.56   |
| No log        | 3.0   | 189  | 40.2016         | 12.2549 | 18.539  |
| No log        | 4.0   | 252  | 37.2002         | 12.1574 | 18.542  |
| No log        | 5.0   | 315  | 35.3253         | 12.1068 | 18.531  |
| No log        | 6.0   | 378  | 34.2646         | 12.1376 | 18.518  |
| No log        | 7.0   | 441  | 33.9330         | 12.1272 | 18.518  |
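The Bleu and Gen Len columns in a table like this are typically produced by a `compute_metrics` callback passed to the `Seq2SeqTrainer`. Below is a sketch of one common implementation using the `evaluate` library with sacrebleu; this is an assumption about the setup, not code taken from this repository:

```python
# Common pattern for computing BLEU ("bleu") and mean generation length ("gen_len").
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("VietAI/envit5-translation")
metric = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Labels are padded with -100; restore the pad token id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    bleu = metric.compute(predictions=decoded_preds,
                          references=[[label] for label in decoded_labels])
    gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
    return {"bleu": bleu["score"], "gen_len": round(float(gen_len), 4)}
```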
### Framework versions
- PEFT 0.14.0
- Transformers 4.44.2
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.19.1