Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

- Error code: DatasetGenerationCastError
- Exception: DatasetGenerationCastError
- Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 5 new columns ({'perplexity', 'eval_runtime', 'eval_loss', 'eval_samples_per_second', 'eval_steps_per_second'}) and 5 missing columns ({'train_steps_per_second', 'train_runtime', 'total_flos', 'train_loss', 'train_samples_per_second'}). This happened while the json dataset builder was generating data using hf://datasets/gonzalobenegas/gpn-animal-promoter-early-checkpoints/eval_results.json (at revision 28b39528060abe16ccea7be321bf4741f943fbdb). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
Traceback:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
    pa_table = table_cast(pa_table, self._schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
epoch: double
eval_loss: double
eval_runtime: double
eval_samples_per_second: double
eval_steps_per_second: double
perplexity: double
to
{'epoch': Value('float64'), 'total_flos': Value('float64'), 'train_loss': Value('float64'), 'train_runtime': Value('float64'), 'train_samples_per_second': Value('float64'), 'train_steps_per_second': Value('float64')}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1450, in compute_config_parquet_and_info_response
    parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 993, in stream_convert_to_parquet
    builder._prepare_split(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 5 new columns ({'perplexity', 'eval_runtime', 'eval_loss', 'eval_samples_per_second', 'eval_steps_per_second'}) and 5 missing columns ({'train_steps_per_second', 'train_runtime', 'total_flos', 'train_loss', 'train_samples_per_second'}).
This happened while the json dataset builder was generating data using
hf://datasets/gonzalobenegas/gpn-animal-promoter-early-checkpoints/eval_results.json (at revision 28b39528060abe16ccea7be321bf4741f943fbdb)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
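The root cause is that the repository's JSON files have disjoint schemas: one file holds training metrics (train_loss, train_runtime, ...) while eval_results.json holds evaluation metrics (eval_loss, perplexity, ...), so the builder cannot cast them to a single set of columns. As a minimal sketch of a workaround for reading the data yourself, each JSON file can be loaded on its own via the generic json builder so that no cross-file schema cast is attempted; note that only eval_results.json is named in the error, so the train-side file name below is an assumption:

```python
from datasets import load_dataset

REPO = "gonzalobenegas/gpn-animal-promoter-early-checkpoints"

# Load only the eval metrics file; no cross-file schema cast is attempted.
eval_results = load_dataset(
    "json",
    data_files=f"hf://datasets/{REPO}/eval_results.json",
    split="train",  # the json builder puts everything in a single "train" split
)
print(eval_results.column_names)
# ['epoch', 'eval_loss', 'eval_runtime', 'eval_samples_per_second',
#  'eval_steps_per_second', 'perplexity']

# Hypothetical: if the training metrics live in a train_results.json,
# load them the same way as a separate dataset.
# train_results = load_dataset(
#     "json", data_files=f"hf://datasets/{REPO}/train_results.json", split="train"
# )
```

To fix the viewer itself, the files would need to be declared as separate configurations in the dataset card's YAML, per the documentation linked above.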
| epoch (float64) | total_flos (float64) | train_loss (float64) | train_runtime (float64) | train_samples_per_second (float64) | train_steps_per_second (float64) |
|---|---|---|---|---|---|
| 2.1184 | 9,574,512,894,935,040,000 | 1.204642 | 11,934.6918 | 1,716.006 | 0.838 |
checkpoints
This model is a fine-tuned version of an unspecified base model on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.2163
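The perplexity column in eval_results.json is directly related to this loss: assuming the loss is a mean cross-entropy in nats (the usual convention for language-modeling evaluation in Transformers), perplexity is simply its exponential. A quick check against the reported value:

```python
import math

eval_loss = 1.2163                 # evaluation loss reported above
perplexity = math.exp(eval_loss)   # perplexity = exp(mean cross-entropy in nats)
print(f"{perplexity:.4f}")         # 3.3747
```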
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for a hypothetical mapping to code):
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 2048
- total_eval_batch_size: 1024
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
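The batch-size figures are internally consistent: 128 per device × 8 devices × 2 gradient-accumulation steps = 2048 effective training batch size, and 128 × 8 = 1024 for evaluation. As a hypothetical sketch (the actual training script is not included in this card), these settings might map onto transformers.TrainingArguments roughly as follows; the output_dir and the eval/logging cadence of 1000 steps are inferred from the card's title and the results table below:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# Multi-GPU distribution itself comes from the launcher (e.g. torchrun
# across 8 devices), not from these arguments.
args = TrainingArguments(
    output_dir="checkpoints",          # assumed from the card title
    learning_rate=1e-3,
    per_device_train_batch_size=128,   # x 8 GPUs x 2 accumulation = 2048 effective
    per_device_eval_batch_size=128,    # x 8 GPUs = 1024 effective
    gradient_accumulation_steps=2,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=1000,
    max_steps=10_000,
    eval_strategy="steps",             # evaluation every 1000 steps, per the table below
    eval_steps=1000,
    logging_steps=1000,
)
```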
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.2665 | 0.1 | 1000 | 1.2514 |
| 1.2196 | 0.2 | 2000 | 1.2411 |
| 1.2079 | 0.3 | 3000 | 1.2353 |
| 1.2018 | 0.4 | 4000 | 1.2307 |
| 1.1977 | 1.0592 | 5000 | 1.2273 |
| 1.1949 | 1.1592 | 6000 | 1.2242 |
| 1.1925 | 1.2592 | 7000 | 1.2245 |
| 1.1906 | 1.3592 | 8000 | 1.2222 |
| 1.1883 | 2.0184 | 9000 | 1.2193 |
| 1.1868 | 2.1184 | 10000 | 1.2175 |
Framework versions
- Transformers 4.57.1
- Pytorch 2.9.1+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1