# flan-t5-base-samsum-tiny
This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.5151
- Rouge1: 47.0628
- Rouge2: 22.7811
- Rougel: 38.9719
- Rougelsum: 42.8569
- Gen Len: 17.68
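
The card below leaves usage details unfilled, so here is a minimal inference sketch, assuming the dialogue-summarization use implied by the "samsum" name; the checkpoint ID is this repository's, while the sample dialogue and `max_length` choice are illustrative assumptions:

```python
from transformers import pipeline

# Load this checkpoint through the standard summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="EdBergJr1/flan-t5-base-samsum-tiny",
)

# Placeholder dialogue; any chat-style input works the same way.
dialogue = (
    "Alice: Are we still on for lunch tomorrow?\n"
    "Bob: Yes, noon at the usual place.\n"
    "Alice: Great, see you then!"
)

# Mean generation length on the eval set is ~17 tokens, so a short
# max_length is a reasonable cap.
print(summarizer(dialogue, max_length=32)[0]["summary_text"])
```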
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
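
A sketch of `Seq2SeqTrainingArguments` mirroring the hyperparameters listed above; `output_dir` is an assumed name, not taken from this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum-tiny",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",   # AdamW (torch), as in OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```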
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| No log        | 1.0   | 13   | 1.5191          | 46.3183 | 22.8147 | 39.5293 | 42.4343   | 16.76   |
| No log        | 2.0   | 26   | 1.5157          | 46.7913 | 22.5395 | 39.2556 | 42.831    | 17.26   |
| No log        | 3.0   | 39   | 1.5151          | 47.0628 | 22.7811 | 38.9719 | 42.8569   | 17.68   |
| No log        | 4.0   | 52   | 1.5185          | 46.5019 | 22.5028 | 38.142  | 42.2845   | 17.57   |
| No log        | 5.0   | 65   | 1.5198          | 46.6278 | 22.5381 | 38.2493 | 42.505    | 17.59   |
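
For reference, ROUGE scores like those in the table can be computed with the `evaluate` library; the `predictions` and `references` lists here are placeholders, and scores are scaled by 100 to match the card's reporting:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder model outputs and gold summaries.
predictions = ["alice and bob are meeting for lunch at noon tomorrow"]
references = ["Alice and Bob will meet for lunch tomorrow at noon."]

# Returns rouge1, rouge2, rougeL, and rougeLsum as floats in [0, 1].
scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 4) for k, v in scores.items()})
```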
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1