# lemexp-task1-v3-lemma_object_full_nodefs-Llama-3.2-1B-8lr-12epochs-no-eos
This model is a fine-tuned version of meta-llama/Llama-3.2-1B on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3427
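PEFT 0.14.0 is listed under the framework versions below, so this repository most likely hosts a PEFT (LoRA-style) adapter on top of meta-llama/Llama-3.2-1B rather than full model weights. The snippet below is a minimal loading sketch under that assumption; the prompt is a placeholder, since the exact input format used for the lemexp task is not documented here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned adapter from this repo.
# Assumes this repo contains a PEFT adapter (PEFT 0.14.0 is listed below).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
model = PeftModel.from_pretrained(
    base,
    "yalhessi/lemexp-task1-v3-lemma_object_full_nodefs-Llama-3.2-1B-8lr-12epochs-no-eos",
)

# Placeholder prompt: the task-specific input format is not documented here.
inputs = tokenizer("lemma", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```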
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
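For reference, the list above maps onto Hugging Face `TrainingArguments` roughly as sketched below. This is an illustrative reconstruction, not the original training script: the output directory is hypothetical, and fp16 is assumed from "Native AMP" (bf16 is equally plausible on recent hardware). The per-device batch size of 2 across 8 GPUs yields the total batch size of 16 listed above.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameter list above.
# Run under torchrun/accelerate with 8 processes to match num_devices=8.
training_args = TrainingArguments(
    output_dir="lemexp-task1-v3",   # hypothetical output path
    learning_rate=8e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",            # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=12,
    fp16=True,                      # "Native AMP" mixed precision (fp16 assumed)
)
```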
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.7585 | 0.2000 | 3114 | 0.7421 |
| 0.7122 | 0.4000 | 6228 | 0.6870 |
| 0.6941 | 0.6000 | 9342 | 0.6607 |
| 0.6676 | 0.8001 | 12456 | 0.6567 |
| 0.6606 | 1.0001 | 15570 | 0.6461 |
| 0.6432 | 1.2001 | 18684 | 0.6161 |
| 0.6363 | 1.4001 | 21798 | 0.6119 |
| 0.6309 | 1.6001 | 24912 | 0.6047 |
| 0.6205 | 1.8001 | 28026 | 0.6043 |
| 0.6092 | 2.0001 | 31140 | 0.5896 |
| 0.6014 | 2.2001 | 34254 | 0.5791 |
| 0.6043 | 2.4002 | 37368 | 0.5783 |
| 0.6006 | 2.6002 | 40482 | 0.5677 |
| 0.5889 | 2.8002 | 43596 | 0.5575 |
| 0.582 | 3.0002 | 46710 | 0.5521 |
| 0.5716 | 3.2002 | 49824 | 0.5594 |
| 0.5605 | 3.4002 | 52938 | 0.5349 |
| 0.5596 | 3.6002 | 56052 | 0.5366 |
| 0.5495 | 3.8002 | 59166 | 0.5353 |
| 0.5536 | 4.0003 | 62280 | 0.5334 |
| 0.5413 | 4.2003 | 65394 | 0.5245 |
| 0.5334 | 4.4003 | 68508 | 0.5178 |
| 0.5331 | 4.6003 | 71622 | 0.5099 |
| 0.5274 | 4.8003 | 74736 | 0.5002 |
| 0.528 | 5.0003 | 77850 | 0.5020 |
| 0.5167 | 5.2003 | 80964 | 0.4962 |
| 0.5097 | 5.4003 | 84078 | 0.4961 |
| 0.5086 | 5.6004 | 87192 | 0.4865 |
| 0.4952 | 5.8004 | 90306 | 0.4789 |
| 0.4965 | 6.0004 | 93420 | 0.4738 |
| 0.4874 | 6.2004 | 96534 | 0.4723 |
| 0.4849 | 6.4004 | 99648 | 0.4666 |
| 0.4795 | 6.6004 | 102762 | 0.4542 |
| 0.478 | 6.8004 | 105876 | 0.4540 |
| 0.4746 | 7.0004 | 108990 | 0.4480 |
| 0.4625 | 7.2005 | 112104 | 0.4473 |
| 0.4533 | 7.4005 | 115218 | 0.4365 |
| 0.4568 | 7.6005 | 118332 | 0.4330 |
| 0.4495 | 7.8005 | 121446 | 0.4296 |
| 0.4406 | 8.0005 | 124560 | 0.4214 |
| 0.4345 | 8.2005 | 127674 | 0.4184 |
| 0.4321 | 8.4005 | 130788 | 0.4127 |
| 0.4266 | 8.6006 | 133902 | 0.4089 |
| 0.4301 | 8.8006 | 137016 | 0.4061 |
| 0.415 | 9.0006 | 140130 | 0.3990 |
| 0.4082 | 9.2006 | 143244 | 0.3969 |
| 0.4041 | 9.4006 | 146358 | 0.3900 |
| 0.399 | 9.6006 | 149472 | 0.3844 |
| 0.3989 | 9.8006 | 152586 | 0.3850 |
| 0.3898 | 10.0006 | 155700 | 0.3796 |
| 0.3819 | 10.2007 | 158814 | 0.3738 |
| 0.3795 | 10.4007 | 161928 | 0.3698 |
| 0.3746 | 10.6007 | 165042 | 0.3594 |
| 0.3668 | 10.8007 | 168156 | 0.3611 |
| 0.3671 | 11.0007 | 171270 | 0.3562 |
| 0.3641 | 11.2007 | 174384 | 0.3539 |
| 0.3539 | 11.4007 | 177498 | 0.3502 |
| 0.3461 | 11.6007 | 180612 | 0.3451 |
| 0.35 | 11.8008 | 183726 | 0.3427 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- PyTorch 2.5.1+cu124
- Datasets 4.2.0
- Tokenizers 0.21.0