hubert-base-superb-er-3kfoldfull40-finetuned-bmd-20250824_112936-LOSO-section-out1

This model is a fine-tuned version of superb/hubert-base-superb-er on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9806
  • Accuracy: 0.5417
  • F1: 0.4444
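
For quick experimentation, the checkpoint can be loaded with the `transformers` audio-classification pipeline. This is a minimal loading sketch, assuming the model is published as `sarasarasara/hubert-base-superb-er-3kfoldfull40-finetuned-bmd-20250824_112936-LOSO-section-out1` (the id shown on this card) and that the input is 16 kHz mono audio, which is what HuBERT expects; the audio file name is a placeholder.

```python
from transformers import pipeline

# Repository id taken from this card; swap in a local path if you have one.
model_id = "sarasarasara/hubert-base-superb-er-3kfoldfull40-finetuned-bmd-20250824_112936-LOSO-section-out1"

classifier = pipeline("audio-classification", model=model_id)

# "sample.wav" is a hypothetical 16 kHz mono file.
predictions = classifier("sample.wav")
print(predictions)  # e.g. [{"label": ..., "score": ...}, ...]
```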

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 1968
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
  • mixed_precision_training: Native AMP
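
As a rough guide, the settings above correspond to a `transformers.TrainingArguments` configuration along the following lines; `output_dir` is an illustrative assumption, not taken from the original run.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="hubert-base-superb-er-finetuned-bmd",  # placeholder name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 * 4 = 32 total train batch size
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=1968,
    optim="adamw_torch_fused",       # AdamW, betas=(0.9, 0.999), eps=1e-08
    fp16=True,                       # "Native AMP" mixed precision
)
```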

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 3    | 1.0946          | 0.3333   | 0.1667 |
| No log        | 2.0   | 6    | 1.0941          | 0.3333   | 0.1667 |
| No log        | 3.0   | 9    | 1.0928          | 0.3333   | 0.1839 |
| 1.0958        | 4.0   | 12   | 1.0911          | 0.375    | 0.2778 |
| 1.0958        | 5.0   | 15   | 1.0893          | 0.4583   | 0.3626 |
| 1.0958        | 6.0   | 18   | 1.0877          | 0.4167   | 0.3375 |
| 1.0877        | 7.0   | 21   | 1.0861          | 0.5417   | 0.4533 |
| 1.0877        | 8.0   | 24   | 1.0840          | 0.5      | 0.4176 |
| 1.0877        | 9.0   | 27   | 1.0814          | 0.5      | 0.4176 |
| 1.0764        | 10.0  | 30   | 1.0781          | 0.5      | 0.4176 |
| 1.0764        | 11.0  | 33   | 1.0744          | 0.5      | 0.4176 |
| 1.0764        | 12.0  | 36   | 1.0694          | 0.5417   | 0.4539 |
| 1.0764        | 13.0  | 39   | 1.0638          | 0.5417   | 0.4539 |
| 1.0655        | 14.0  | 42   | 1.0571          | 0.5417   | 0.4539 |
| 1.0655        | 15.0  | 45   | 1.0501          | 0.5417   | 0.4539 |
| 1.0655        | 16.0  | 48   | 1.0426          | 0.5      | 0.4127 |
| 1.0437        | 17.0  | 51   | 1.0357          | 0.5      | 0.4127 |
| 1.0437        | 18.0  | 54   | 1.0292          | 0.5      | 0.4127 |
| 1.0437        | 19.0  | 57   | 1.0221          | 0.4583   | 0.3780 |
| 1.0197        | 20.0  | 60   | 1.0151          | 0.5      | 0.4176 |
| 1.0197        | 21.0  | 63   | 1.0088          | 0.5417   | 0.4533 |
| 1.0197        | 22.0  | 66   | 1.0039          | 0.5417   | 0.4533 |
| 1.0197        | 23.0  | 69   | 0.9989          | 0.5417   | 0.4533 |
| 0.9988        | 24.0  | 72   | 0.9927          | 0.5417   | 0.4533 |
| 0.9988        | 25.0  | 75   | 0.9884          | 0.5417   | 0.4533 |
| 0.9988        | 26.0  | 78   | 0.9851          | 0.5417   | 0.4533 |
| 0.9709        | 27.0  | 81   | 0.9839          | 0.5417   | 0.4533 |
| 0.9709        | 28.0  | 84   | 0.9816          | 0.5417   | 0.4444 |
| 0.9709        | 29.0  | 87   | 0.9806          | 0.5417   | 0.4444 |
| 0.9614        | 30.0  | 90   | 0.9806          | 0.5417   | 0.4444 |
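
The Accuracy and F1 columns can be reproduced with a standard `compute_metrics` callback passed to the `Trainer`. The sketch below uses scikit-learn and assumes macro-averaged F1; the averaging mode is not stated on this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Turn Trainer logits/labels into the Accuracy and F1 columns above."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="macro"),  # macro is an assumption
    }
```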

Framework versions

  • Transformers 4.55.2
  • PyTorch 2.8.0+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.4