wizardII committed
Commit 2262d61 (verified)
1 Parent(s): cdeda51

Update README.md

Files changed (1)
  1. README.md +5 -14
README.md CHANGED
@@ -37,23 +37,14 @@ task_categories:
 
 ## 📖 Overview
 
- [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data) is **a dataset of verifiable, challenging, and diverse math problems (105K) and coding questions (14K)**. This dataset is used to train the **`Skywork-OR1`** (Open Reasoner 1) model series, which consists of powerful math and code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes. This series includes two general-purpose reasoning models, **`Skywork-OR1-7B-Preview`** and **`Skywork-OR1-32B-Preview`**, along with a math-specialized model, **`Skywork-OR1-Math-7B`**.
+ [`ArcherCodeR-Dataset`](https://huggingface.co/datasets/wizardII/ArcherCodeR-Dataset) is **a dataset of verifiable, challenging, and diverse coding questions (6K)**. This dataset is used to train the **`ArcherCodeR`** model series, which consists of code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes.
 
- - **[`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B)** is specifically optimized for mathematical reasoning, scoring **69.8** on AIME24 and **52.3** on AIME25, well ahead of all models of similar size.
- - **[`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview)** delivers performance on par with the 671B-parameter Deepseek-R1 on math tasks (AIME24 and AIME25) and coding tasks (LiveCodeBench).
- - **[`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)** outperforms all similarly sized models in both math and coding scenarios.
+ We select, clean, and curate coding problems from open-source datasets, including
 
- We select, clean, and curate math and coding problems from open-source datasets, including
+ - [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset)
+ - [deepmind/code_contests](https://huggingface.co/datasets/deepmind/code_contests)
+ - [open-r1/codeforces](https://huggingface.co/datasets/open-r1/codeforces)
 
- - [NuminaMath-1.5](https://huggingface.co/datasets/AI-MO/NuminaMath-1.5)
- - [DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset)
- - [STILL-3-Preview-RL-Data](https://huggingface.co/datasets/RUC-AIBOX/STILL-3-Preview-RL-Data)
- - [Omni-Math](https://huggingface.co/datasets/KbsdJames/Omni-MATH)
- - [AIME problems prior to 2024](https://huggingface.co/datasets/gneubig/aime-1983-2024)
- - [LeetCodeDataset](https://huggingface.co/datasets/newfacade/LeetCodeDataset)
- - [TACO](https://huggingface.co/datasets/BAAI/TACO)
-
- We conduct **model-aware difficulty estimation** for each problem and model, and perform **rigorous quality assessment prior to training** via both human review and LLM-as-a-Judge to ensure training efficiency and effectiveness. We also perform deduplication within the dataset and remove similar problems from AIME 24, AIME 25, and LiveCodeBench to prevent data contamination.
 
 ## Technical Report
 
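The updated overview describes `ArcherCodeR-Dataset` as a Hub-hosted set of verifiable coding questions. A minimal sketch of loading it with the Hugging Face `datasets` library, assuming a default `train` split (the split name and column layout are assumptions and are not documented in the README excerpt above):

```python
# Minimal sketch: load ArcherCodeR-Dataset from the Hugging Face Hub.
# Assumptions: a default "train" split exists; inspect the schema before
# relying on any particular column names.
from datasets import load_dataset

ds = load_dataset("wizardII/ArcherCodeR-Dataset", split="train")

print(ds)               # row count and column names
print(ds.column_names)  # actual schema of the coding questions
print(ds[0])            # first record
```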
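Since the overview calls the questions verifiable and the training rule-based reinforcement learning, here is an illustrative sketch of what a rule-based reward for code typically looks like: a candidate program scores 1.0 only if it reproduces the expected output on every test case. The test format and the function below are hypothetical and are not ArcherCodeR's actual verifier.

```python
# Illustrative, hypothetical rule-based reward for verifiable coding questions:
# a candidate program earns reward 1.0 only if it passes every I/O test case.
import subprocess
import sys


def rule_based_reward(program: str, tests: list[tuple[str, str]], timeout: float = 5.0) -> float:
    """Run `program` on each (stdin, expected_stdout) pair; all must match."""
    for stdin_text, expected in tests:
        try:
            result = subprocess.run(
                [sys.executable, "-c", program],  # sandboxing omitted for brevity
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return 0.0
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return 0.0
    return 1.0


# Toy usage: a "double the input" problem with two I/O test cases.
candidate = "print(int(input()) * 2)"
print(rule_based_reward(candidate, [("3", "6"), ("10", "20")]))  # -> 1.0
```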