# QiMeng-MuPa

Part of the **QiMeng-MuPa** collection: *QiMeng-MuPa: Mutual-Supervised Learning for Sequential-to-Parallel Code Translation*.
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the round4_new dataset.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
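Since the usage sections above are still unfilled, here is a minimal inference sketch for trying this checkpoint on sequential-to-parallel translation with `transformers`. The repo id `your-org/your-checkpoint`, the prompt template, and the generation settings are placeholders and assumptions, not confirmed by this card; check the `round4_new` training data for the exact prompt format this fine-tune expects.

```python
def translate_sequential_to_parallel(model_id: str, source_code: str) -> str:
    """Generate a parallel translation of `source_code` with a causal LM checkpoint."""
    # Imports kept inside the function so the sketch can be read without
    # torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Assumed prompt template -- not confirmed by this card.
    prompt = f"Translate the following sequential code to parallel code:\n{source_code}\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    # Strip the prompt tokens; keep only the newly generated continuation.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    # "your-org/your-checkpoint" is a placeholder for this repo's actual id.
    print(translate_sequential_to_parallel(
        "your-org/your-checkpoint",
        "for (int i = 0; i < n; i++) c[i] = a[i] + b[i];",
    ))
```

Loading an 8B model this way needs roughly 16 GB of accelerator memory in bfloat16; `device_map="auto"` lets Accelerate place layers across available devices.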
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training: