BigDong committed · Commit 6bdb425 · Parent: aec0d15

add Ultra-FineWeb lighteval task python file

evaluation/lighteval_tasks_ultrafineweb.py ADDED
@@ -0,0 +1,645 @@
+ # ruff: noqa: F405, F403, F401
"""
Custom evaluation tasks for lighteval
Do note that we ran the evals with `max_samples=1000` to speed up large evals.
Most custom prompt changes were in an attempt to improve signal for small models in general.
This file generally creates just a TASKS_TABLE and TASKS_GROUPS which are then imported by LightEval.
Example usage (lighteval_tasks_ultrafineweb.py is the path to this file):
===================
accelerate launch --num_processes=1 lighteval/run_evals_accelerate.py --model_args="pretrained=HuggingFaceFW/ablation-model-fineweb-edu" \
--custom_tasks "lighteval_tasks_ultrafineweb.py" --output_dir [OUTPUTPATH] --max_samples 1000 \
--tasks "custom|hellaswag|0|1,custom|winogrande|0|1,custom|piqa|0|1,custom|siqa|0|1,custom|openbookqa|0|1,custom|arc:easy|0|1,custom|arc:challenge|0|1,custom|commonsense_qa|0|1,custom|mmlu:abstract_algebra|0|1,custom|mmlu:anatomy|0|1,custom|mmlu:astronomy|0|1,custom|mmlu:business_ethics|0|1,custom|mmlu:clinical_knowledge|0|1,custom|mmlu:college_biology|0|1,custom|mmlu:college_chemistry|0|1,custom|mmlu:college_computer_science|0|1,custom|mmlu:college_mathematics|0|1,custom|mmlu:college_medicine|0|1,custom|mmlu:college_physics|0|1,custom|mmlu:computer_security|0|1,custom|mmlu:conceptual_physics|0|1,custom|mmlu:econometrics|0|1,custom|mmlu:electrical_engineering|0|1,custom|mmlu:elementary_mathematics|0|1,custom|mmlu:formal_logic|0|1,custom|mmlu:global_facts|0|1,custom|mmlu:high_school_biology|0|1,custom|mmlu:high_school_chemistry|0|1,custom|mmlu:high_school_computer_science|0|1,custom|mmlu:high_school_european_history|0|1,custom|mmlu:high_school_geography|0|1,custom|mmlu:high_school_government_and_politics|0|1,custom|mmlu:high_school_macroeconomics|0|1,custom|mmlu:high_school_mathematics|0|1,custom|mmlu:high_school_microeconomics|0|1,custom|mmlu:high_school_physics|0|1,custom|mmlu:high_school_psychology|0|1,custom|mmlu:high_school_statistics|0|1,custom|mmlu:high_school_us_history|0|1,custom|mmlu:high_school_world_history|0|1,custom|mmlu:human_aging|0|1,custom|mmlu:human_sexuality|0|1,custom|mmlu:international_law|0|1,custom|mmlu:jurisprudence|0|1,custom|mmlu:logical_fallacies|0|1,custom|mmlu:machine_learning|0|1,custom|mmlu:management|0|1,custom|mmlu:marketing|0|1,custom|mmlu:medical_genetics|0|1,custom|mmlu:miscellaneous|0|1,custom|mmlu:moral_disputes|0|1,custom|mmlu:moral_scenarios|0|1,custom|mmlu:nutrition|0|1,custom|mmlu:philosophy|0|1,custom|mmlu:prehistory|0|1,custom|mmlu:professional_accounting|0|1,custom|mmlu:professional_law|0|1,custom|mmlu:professional_medicine|0|1,custom|mmlu:professional_psychology|0|1,custom|mmlu:public_relations|0|1,custom|mmlu:security_studies|0|1,custom|mmlu:sociology|0|1,custom|mmlu:us_foreign_policy|0|1,custom|mmlu:virology|0|1,custom|mmlu:world_religions|0|1"
===================
More info here: https://github.com/huggingface/lighteval?tab=readme-ov-file#evaluate-a-model-on-extended-community-or-custom-tasks
For more info on differences between MMLU implementations: https://huggingface.co/blog/open-llm-leaderboard-mmlu#1001-flavors-of-mmlu
In particular, the default leaderboard MMLU implementation (which uses "A", "B", etc. as answer targets) gives generally random results on small/non-instruction-tuned models.
Instead, we use the full MMLU answer as the target.
"""
import re
from typing import List, Tuple

from lighteval.metrics import Metrics
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc
from lighteval.tasks.tasks_prompt_formatting import LETTER_INDICES

_TASKS_STRINGS: List[Tuple[LightevalTaskConfig, str]] = []
_TASKS: List[LightevalTaskConfig] = []

## COMMON_SENSE_REASONING_TASKS ##
COMMON_SENSE_REASONING_TASKS = [
    LightevalTaskConfig(
        name="hellaswag",
        prompt_function="hellaswag_prompt",
        hf_repo="hellaswag",
        hf_subset="default",
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
    LightevalTaskConfig(
        name="winogrande",
        prompt_function="winogrande",
        hf_repo="winogrande",
        hf_subset="winogrande_xl",
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
    LightevalTaskConfig(
        name="piqa",
        prompt_function="piqa_harness",
        hf_repo="piqa",
        hf_subset="plain_text",
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
    LightevalTaskConfig(
        name="siqa",
        prompt_function="siqa_prompt",
        hf_repo="siqa",
        hf_subset="default",
        hf_avail_splits=["train", "validation"],
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
    LightevalTaskConfig(
        name="openbookqa",
        prompt_function="openbookqa",
        hf_repo="openbookqa",
        hf_subset="main",
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
    LightevalTaskConfig(
        name="arc:easy",
        prompt_function="arc",
        hf_repo="ai2_arc",
        hf_subset="ARC-Easy",
        evaluation_splits=["test"],
        generation_size=1,
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
    LightevalTaskConfig(
        name="arc:challenge",
        prompt_function="arc",
        hf_repo="ai2_arc",
        hf_subset="ARC-Challenge",
        evaluation_splits=["test"],
        generation_size=1,
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
    LightevalTaskConfig(
        name="commonsense_qa",
        prompt_function="commonsense_qa_prompt",
        hf_repo="commonsense_qa",
        hf_subset="default",
        metric=["loglikelihood_acc", "loglikelihood_acc_norm_nospace"],
    ),
]


def commonsense_qa_prompt(line, task_name: str = None):
    return Doc(
        task_name=task_name,
        query=line["question"],
        choices=[f" {c}" for c in line["choices"]["text"]],
        gold_index=LETTER_INDICES.index(line["answerKey"].strip()),
        instruction="",
    )


def siqa_prompt(line, task_name: str = None):
    return Doc(
        task_name=task_name,
        query=line["context"] + " " + line["question"],
        choices=[f" {c}" for c in [line["answerA"], line["answerB"], line["answerC"]]],
        gold_index=int(line["label"]) - 1,
        instruction="",
    )


def hellaswag_prompt(line, task_name: str = None):
    def preprocess(text):
        """Comes from AiHarness"""
        # text = text.strip()
        # NOTE: Brackets are artifacts of the WikiHow dataset portion of HellaSwag.
        text = text.replace(" [title]", ". ")
        text = re.sub("\\[.*?\\]", "", text)
        text = text.replace("  ", " ")  # collapse double spaces
        return text

    ctx = f"{line['ctx_a']} {line['ctx_b'].capitalize()} "
    return Doc(
        task_name=task_name,
        query=preprocess(line["activity_label"] + ": " + ctx),
        choices=[" " + preprocess(ending) for ending in line["endings"]],
        gold_index=int(line["label"]) if line["label"] != "" else -1,  # -1 for test
        # "metric": "choices_loglikelihood",
    )


# 0 shot for common sense
COMMON_SENSE_REASONING_STRING = [(t, f"custom|{t.name}|0|1") for t in COMMON_SENSE_REASONING_TASKS]
_TASKS_STRINGS.extend(COMMON_SENSE_REASONING_STRING)
_TASKS += COMMON_SENSE_REASONING_TASKS

## MMLU ##
class CustomMMLUEvaluationTask(LightevalTaskConfig):
    def __init__(
        self,
        name,
        prompt_function="mmlu_prompt",
        hf_repo="mmlu",
        hf_subset=None,
        # metric=[Metrics.loglikelihood_acc_single_token],
        metric=[Metrics.loglikelihood_acc, Metrics.loglikelihood_acc_norm_nospace],
        hf_avail_splits=None,
        evaluation_splits=["test"],
        few_shots_split="validation",
        few_shots_select=None,
        suite=None,
        generation_size=-1,
        stop_sequence=None,
        output_regex=None,
        frozen=False,
    ):
        super().__init__(
            name=name,
            prompt_function=prompt_function,
            hf_repo=hf_repo,
            hf_subset=hf_subset,
            metric=metric,
            hf_avail_splits=hf_avail_splits,
            evaluation_splits=evaluation_splits,
            few_shots_split=few_shots_split,
            few_shots_select=few_shots_select,
            suite=suite,
            generation_size=generation_size,
            stop_sequence=stop_sequence,
            output_regex=output_regex,
            frozen=frozen,
        )


MMLU_TASKS = [
    CustomMMLUEvaluationTask(name="mmlu:abstract_algebra", hf_subset="abstract_algebra"),
    CustomMMLUEvaluationTask(name="mmlu:anatomy", hf_subset="anatomy"),
    CustomMMLUEvaluationTask(name="mmlu:astronomy", hf_subset="astronomy"),
    CustomMMLUEvaluationTask(name="mmlu:business_ethics", hf_subset="business_ethics"),
    CustomMMLUEvaluationTask(name="mmlu:clinical_knowledge", hf_subset="clinical_knowledge"),
    CustomMMLUEvaluationTask(name="mmlu:college_biology", hf_subset="college_biology"),
    CustomMMLUEvaluationTask(name="mmlu:college_chemistry", hf_subset="college_chemistry"),
    CustomMMLUEvaluationTask(name="mmlu:college_computer_science", hf_subset="college_computer_science"),
    CustomMMLUEvaluationTask(name="mmlu:college_mathematics", hf_subset="college_mathematics"),
    CustomMMLUEvaluationTask(name="mmlu:college_medicine", hf_subset="college_medicine"),
    CustomMMLUEvaluationTask(name="mmlu:college_physics", hf_subset="college_physics"),
    CustomMMLUEvaluationTask(name="mmlu:computer_security", hf_subset="computer_security"),
    CustomMMLUEvaluationTask(name="mmlu:conceptual_physics", hf_subset="conceptual_physics"),
    CustomMMLUEvaluationTask(name="mmlu:econometrics", hf_subset="econometrics"),
    CustomMMLUEvaluationTask(name="mmlu:electrical_engineering", hf_subset="electrical_engineering"),
    CustomMMLUEvaluationTask(name="mmlu:elementary_mathematics", hf_subset="elementary_mathematics"),
    CustomMMLUEvaluationTask(name="mmlu:formal_logic", hf_subset="formal_logic"),
    CustomMMLUEvaluationTask(name="mmlu:global_facts", hf_subset="global_facts"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_biology", hf_subset="high_school_biology"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_chemistry", hf_subset="high_school_chemistry"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_computer_science", hf_subset="high_school_computer_science"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_european_history", hf_subset="high_school_european_history"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_geography", hf_subset="high_school_geography"),
    CustomMMLUEvaluationTask(
        name="mmlu:high_school_government_and_politics", hf_subset="high_school_government_and_politics"
    ),
    CustomMMLUEvaluationTask(name="mmlu:high_school_macroeconomics", hf_subset="high_school_macroeconomics"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_mathematics", hf_subset="high_school_mathematics"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_microeconomics", hf_subset="high_school_microeconomics"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_physics", hf_subset="high_school_physics"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_psychology", hf_subset="high_school_psychology"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_statistics", hf_subset="high_school_statistics"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_us_history", hf_subset="high_school_us_history"),
    CustomMMLUEvaluationTask(name="mmlu:high_school_world_history", hf_subset="high_school_world_history"),
    CustomMMLUEvaluationTask(name="mmlu:human_aging", hf_subset="human_aging"),
    CustomMMLUEvaluationTask(name="mmlu:human_sexuality", hf_subset="human_sexuality"),
    CustomMMLUEvaluationTask(name="mmlu:international_law", hf_subset="international_law"),
    CustomMMLUEvaluationTask(name="mmlu:jurisprudence", hf_subset="jurisprudence"),
    CustomMMLUEvaluationTask(name="mmlu:logical_fallacies", hf_subset="logical_fallacies"),
    CustomMMLUEvaluationTask(name="mmlu:machine_learning", hf_subset="machine_learning"),
    CustomMMLUEvaluationTask(name="mmlu:management", hf_subset="management"),
    CustomMMLUEvaluationTask(name="mmlu:marketing", hf_subset="marketing"),
    CustomMMLUEvaluationTask(name="mmlu:medical_genetics", hf_subset="medical_genetics"),
    CustomMMLUEvaluationTask(name="mmlu:miscellaneous", hf_subset="miscellaneous"),
    CustomMMLUEvaluationTask(name="mmlu:moral_disputes", hf_subset="moral_disputes"),
    CustomMMLUEvaluationTask(name="mmlu:moral_scenarios", hf_subset="moral_scenarios"),
    CustomMMLUEvaluationTask(name="mmlu:nutrition", hf_subset="nutrition"),
    CustomMMLUEvaluationTask(name="mmlu:philosophy", hf_subset="philosophy"),
    CustomMMLUEvaluationTask(name="mmlu:prehistory", hf_subset="prehistory"),
    CustomMMLUEvaluationTask(name="mmlu:professional_accounting", hf_subset="professional_accounting"),
    CustomMMLUEvaluationTask(name="mmlu:professional_law", hf_subset="professional_law"),
    CustomMMLUEvaluationTask(name="mmlu:professional_medicine", hf_subset="professional_medicine"),
    CustomMMLUEvaluationTask(name="mmlu:professional_psychology", hf_subset="professional_psychology"),
    CustomMMLUEvaluationTask(name="mmlu:public_relations", hf_subset="public_relations"),
    CustomMMLUEvaluationTask(name="mmlu:security_studies", hf_subset="security_studies"),
    CustomMMLUEvaluationTask(name="mmlu:sociology", hf_subset="sociology"),
    CustomMMLUEvaluationTask(name="mmlu:us_foreign_policy", hf_subset="us_foreign_policy"),
    CustomMMLUEvaluationTask(name="mmlu:virology", hf_subset="virology"),
    CustomMMLUEvaluationTask(name="mmlu:world_religions", hf_subset="world_religions"),
]


def mmlu_prompt(line, task_name: str = None):
    """MMLU prompt without letters"""
    topic = line["subject"]
    prompt = f"The following are questions about {topic.replace('_', ' ')}.\nQuestion: "
    prompt += line["question"] + "\nAnswer:"

    return Doc(
        task_name=task_name,
        query=prompt,
        choices=[f" {c}" for c in line["choices"]],
        gold_index=line["answer"],
        instruction=f"The following are questions about {topic.replace('_', ' ')}.\n",
    )
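
# Illustrative rendering (example only): for a row with subject "college_biology",
# the query looks like
#   "The following are questions about college biology.\nQuestion: <question>\nAnswer:"
# and each choice is scored as a full-text continuation such as " <answer text>",
# rather than a single letter target, per the module docstring.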


MMLU_STRING = [(t, f"custom|{t.name}|0|1") for t in MMLU_TASKS]
_TASKS_STRINGS.extend(MMLU_STRING)
_TASKS += MMLU_TASKS

# common sense reasoning + mmlu
EARLY_SIGNAL_TASKS = ",".join([t[1] for t in COMMON_SENSE_REASONING_STRING] + [t[1] for t in MMLU_STRING])

## CMMLU ##
class CustomCMMLUEvaluationTask(LightevalTaskConfig):
    def __init__(
        self,
        name,
        prompt_function="cmmlu_prompt",
        hf_repo="cmmlu",
        hf_subset=None,
        # metric=[Metrics.loglikelihood_acc_single_token],
        metric=[Metrics.loglikelihood_acc, Metrics.loglikelihood_acc_norm_nospace],
        hf_avail_splits=None,
        evaluation_splits=["test"],
        few_shots_split="validation",
        few_shots_select=None,
        suite=None,
        generation_size=-1,
        stop_sequence=None,
        output_regex=None,
        frozen=False,
    ):
        super().__init__(
            name=name,
            prompt_function=prompt_function,
            hf_repo=hf_repo,
            hf_subset=hf_subset,
            metric=metric,
            hf_avail_splits=hf_avail_splits,
            evaluation_splits=evaluation_splits,
            few_shots_split=few_shots_split,
            few_shots_select=few_shots_select,
            suite=suite,
            generation_size=generation_size,
            stop_sequence=stop_sequence,
            output_regex=output_regex,
            frozen=frozen,
        )


CMMLU_TASKS = [
    CustomCMMLUEvaluationTask(name="cmmlu:agronomy", hf_subset="agronomy"),
    CustomCMMLUEvaluationTask(name="cmmlu:anatomy", hf_subset="anatomy"),
    CustomCMMLUEvaluationTask(name="cmmlu:ancient_chinese", hf_subset="ancient_chinese"),
    CustomCMMLUEvaluationTask(name="cmmlu:arts", hf_subset="arts"),
    CustomCMMLUEvaluationTask(name="cmmlu:astronomy", hf_subset="astronomy"),
    CustomCMMLUEvaluationTask(name="cmmlu:business_ethics", hf_subset="business_ethics"),
    CustomCMMLUEvaluationTask(name="cmmlu:chinese_civil_service_exam", hf_subset="chinese_civil_service_exam"),
    CustomCMMLUEvaluationTask(name="cmmlu:chinese_driving_rule", hf_subset="chinese_driving_rule"),
    CustomCMMLUEvaluationTask(name="cmmlu:chinese_food_culture", hf_subset="chinese_food_culture"),
    CustomCMMLUEvaluationTask(name="cmmlu:chinese_foreign_policy", hf_subset="chinese_foreign_policy"),
    CustomCMMLUEvaluationTask(name="cmmlu:chinese_history", hf_subset="chinese_history"),
    CustomCMMLUEvaluationTask(name="cmmlu:chinese_literature", hf_subset="chinese_literature"),
    CustomCMMLUEvaluationTask(name="cmmlu:chinese_teacher_qualification", hf_subset="chinese_teacher_qualification"),
    CustomCMMLUEvaluationTask(name="cmmlu:clinical_knowledge", hf_subset="clinical_knowledge"),
    CustomCMMLUEvaluationTask(name="cmmlu:college_actuarial_science", hf_subset="college_actuarial_science"),
    CustomCMMLUEvaluationTask(name="cmmlu:college_education", hf_subset="college_education"),
    CustomCMMLUEvaluationTask(name="cmmlu:college_engineering_hydrology", hf_subset="college_engineering_hydrology"),
    CustomCMMLUEvaluationTask(name="cmmlu:college_law", hf_subset="college_law"),
    CustomCMMLUEvaluationTask(name="cmmlu:college_mathematics", hf_subset="college_mathematics"),
    CustomCMMLUEvaluationTask(name="cmmlu:college_medical_statistics", hf_subset="college_medical_statistics"),
    CustomCMMLUEvaluationTask(name="cmmlu:college_medicine", hf_subset="college_medicine"),
    CustomCMMLUEvaluationTask(name="cmmlu:computer_science", hf_subset="computer_science"),
    CustomCMMLUEvaluationTask(name="cmmlu:computer_security", hf_subset="computer_security"),
    CustomCMMLUEvaluationTask(name="cmmlu:conceptual_physics", hf_subset="conceptual_physics"),
    CustomCMMLUEvaluationTask(name="cmmlu:construction_project_management", hf_subset="construction_project_management"),
    CustomCMMLUEvaluationTask(name="cmmlu:economics", hf_subset="economics"),
    CustomCMMLUEvaluationTask(name="cmmlu:education", hf_subset="education"),
    CustomCMMLUEvaluationTask(name="cmmlu:electrical_engineering", hf_subset="electrical_engineering"),
    CustomCMMLUEvaluationTask(name="cmmlu:elementary_chinese", hf_subset="elementary_chinese"),
    CustomCMMLUEvaluationTask(name="cmmlu:elementary_commonsense", hf_subset="elementary_commonsense"),
    CustomCMMLUEvaluationTask(name="cmmlu:elementary_information_and_technology", hf_subset="elementary_information_and_technology"),
    CustomCMMLUEvaluationTask(name="cmmlu:elementary_mathematics", hf_subset="elementary_mathematics"),
    CustomCMMLUEvaluationTask(name="cmmlu:ethnology", hf_subset="ethnology"),
    CustomCMMLUEvaluationTask(name="cmmlu:food_science", hf_subset="food_science"),
    CustomCMMLUEvaluationTask(name="cmmlu:genetics", hf_subset="genetics"),
    CustomCMMLUEvaluationTask(name="cmmlu:global_facts", hf_subset="global_facts"),
    CustomCMMLUEvaluationTask(name="cmmlu:high_school_biology", hf_subset="high_school_biology"),
    CustomCMMLUEvaluationTask(name="cmmlu:high_school_chemistry", hf_subset="high_school_chemistry"),
    CustomCMMLUEvaluationTask(name="cmmlu:high_school_geography", hf_subset="high_school_geography"),
    CustomCMMLUEvaluationTask(name="cmmlu:high_school_mathematics", hf_subset="high_school_mathematics"),
    CustomCMMLUEvaluationTask(name="cmmlu:high_school_physics", hf_subset="high_school_physics"),
    CustomCMMLUEvaluationTask(name="cmmlu:high_school_politics", hf_subset="high_school_politics"),
    CustomCMMLUEvaluationTask(name="cmmlu:human_sexuality", hf_subset="human_sexuality"),
    CustomCMMLUEvaluationTask(name="cmmlu:international_law", hf_subset="international_law"),
    CustomCMMLUEvaluationTask(name="cmmlu:journalism", hf_subset="journalism"),
    CustomCMMLUEvaluationTask(name="cmmlu:jurisprudence", hf_subset="jurisprudence"),
    CustomCMMLUEvaluationTask(name="cmmlu:legal_and_moral_basis", hf_subset="legal_and_moral_basis"),
    CustomCMMLUEvaluationTask(name="cmmlu:logical", hf_subset="logical"),
    CustomCMMLUEvaluationTask(name="cmmlu:machine_learning", hf_subset="machine_learning"),
    CustomCMMLUEvaluationTask(name="cmmlu:management", hf_subset="management"),
    CustomCMMLUEvaluationTask(name="cmmlu:marketing", hf_subset="marketing"),
    CustomCMMLUEvaluationTask(name="cmmlu:marxist_theory", hf_subset="marxist_theory"),
    CustomCMMLUEvaluationTask(name="cmmlu:modern_chinese", hf_subset="modern_chinese"),
    CustomCMMLUEvaluationTask(name="cmmlu:nutrition", hf_subset="nutrition"),
    CustomCMMLUEvaluationTask(name="cmmlu:philosophy", hf_subset="philosophy"),
    CustomCMMLUEvaluationTask(name="cmmlu:professional_accounting", hf_subset="professional_accounting"),
    CustomCMMLUEvaluationTask(name="cmmlu:professional_law", hf_subset="professional_law"),
    CustomCMMLUEvaluationTask(name="cmmlu:professional_medicine", hf_subset="professional_medicine"),
    CustomCMMLUEvaluationTask(name="cmmlu:professional_psychology", hf_subset="professional_psychology"),
    CustomCMMLUEvaluationTask(name="cmmlu:public_relations", hf_subset="public_relations"),
    CustomCMMLUEvaluationTask(name="cmmlu:security_study", hf_subset="security_study"),
    CustomCMMLUEvaluationTask(name="cmmlu:sociology", hf_subset="sociology"),
    CustomCMMLUEvaluationTask(name="cmmlu:sports_science", hf_subset="sports_science"),
    CustomCMMLUEvaluationTask(name="cmmlu:traditional_chinese_medicine", hf_subset="traditional_chinese_medicine"),
    CustomCMMLUEvaluationTask(name="cmmlu:virology", hf_subset="virology"),
    CustomCMMLUEvaluationTask(name="cmmlu:world_history", hf_subset="world_history"),
    CustomCMMLUEvaluationTask(name="cmmlu:world_religions", hf_subset="world_religions")
]


CMMLU_SUBJECT_MAPPING = {
    'agronomy': '农学',
    'anatomy': '解剖学',
    'ancient_chinese': '古汉语',
    'arts': '艺术学',
    'astronomy': '天文学',
    'business_ethics': '商业伦理',
    'chinese_civil_service_exam': '中国公务员考试',
    'chinese_driving_rule': '中国驾驶规则',
    'chinese_food_culture': '中国饮食文化',
    'chinese_foreign_policy': '中国外交政策',
    'chinese_history': '中国历史',
    'chinese_literature': '中国文学',
    'chinese_teacher_qualification': '中国教师资格',
    'clinical_knowledge': '临床知识',
    'college_actuarial_science': '大学精算学',
    'college_education': '大学教育学',
    'college_engineering_hydrology': '大学工程水文学',
    'college_law': '大学法律',
    'college_mathematics': '大学数学',
    'college_medical_statistics': '大学医学统计',
    'college_medicine': '大学医学',
    'computer_science': '计算机科学',
    'computer_security': '计算机安全',
    'conceptual_physics': '概念物理学',
    'construction_project_management': '建设工程管理',
    'economics': '经济学',
    'education': '教育学',
    'electrical_engineering': '电气工程',
    'elementary_chinese': '小学语文',
    'elementary_commonsense': '小学常识',
    'elementary_information_and_technology': '小学信息技术',
    'elementary_mathematics': '初等数学',
    'ethnology': '民族学',
    'food_science': '食品科学',
    'genetics': '遗传学',
    'global_facts': '全球事实',
    'high_school_biology': '高中生物',
    'high_school_chemistry': '高中化学',
    'high_school_geography': '高中地理',
    'high_school_mathematics': '高中数学',
    'high_school_physics': '高中物理学',
    'high_school_politics': '高中政治',
    'human_sexuality': '人类性行为',
    'international_law': '国际法学',
    'journalism': '新闻学',
    'jurisprudence': '法理学',
    'legal_and_moral_basis': '法律与道德基础',
    'logical': '逻辑学',
    'machine_learning': '机器学习',
    'management': '管理学',
    'marketing': '市场营销',
    'marxist_theory': '马克思主义理论',
    'modern_chinese': '现代汉语',
    'nutrition': '营养学',
    'philosophy': '哲学',
    'professional_accounting': '专业会计',
    'professional_law': '专业法学',
    'professional_medicine': '专业医学',
    'professional_psychology': '专业心理学',
    'public_relations': '公共关系',
    'security_study': '安全研究',
    'sociology': '社会学',
    'sports_science': '体育学',
    'traditional_chinese_medicine': '中医中药',
    'virology': '病毒学',
    'world_history': '世界历史',
    'world_religions': '世界宗教'
}


def cmmlu_prompt(line, task_name: str = None):
    """CMMLU prompt without letters"""
    topic = line["subject"]
    _ch_name = CMMLU_SUBJECT_MAPPING[topic]
    prompt = f"以下是关于{_ch_name}的单项选择题,请直接给出正确答案的选项。\n题目:"
    prompt += line["question"] + "\n答案是:"

    return Doc(
        task_name=task_name,
        query=prompt,
        choices=[f" {c}" for c in line["choices"]],
        gold_index=line["answer"],
        instruction=f"以下是关于{_ch_name}的单项选择题,请直接给出正确答案的选项。\n",
    )


CMMLU_STRING = [(t, f"custom|{t.name}|0|1") for t in CMMLU_TASKS]
_TASKS_STRINGS.extend(CMMLU_STRING)
_TASKS += CMMLU_TASKS

# cmmlu
EARLY_SIGNAL_TASKS = ",".join([t[1] for t in COMMON_SENSE_REASONING_STRING] + [t[1] for t in CMMLU_STRING])


## CEVAL ##
class CustomCEVALEvaluationTask(LightevalTaskConfig):
    def __init__(
        self,
        name,
        prompt_function="ceval_prompt",
        hf_repo="ceval",
        hf_subset=None,
        # metric=[Metrics.loglikelihood_acc_single_token],
        metric=[Metrics.loglikelihood_acc, Metrics.loglikelihood_acc_norm_nospace],
        hf_avail_splits=None,
        evaluation_splits=["test"],
        few_shots_split="validation",
        few_shots_select=None,
        suite=None,
        generation_size=-1,
        stop_sequence=None,
        output_regex=None,
        frozen=False,
    ):
        super().__init__(
            name=name,
            prompt_function=prompt_function,
            hf_repo=hf_repo,
            hf_subset=hf_subset,
            metric=metric,
            hf_avail_splits=hf_avail_splits,
            evaluation_splits=evaluation_splits,
            few_shots_split=few_shots_split,
            few_shots_select=few_shots_select,
            suite=suite,
            generation_size=generation_size,
            stop_sequence=stop_sequence,
            output_regex=output_regex,
            frozen=frozen,
        )


CEVAL_TASKS = [
    CustomCEVALEvaluationTask(name="ceval:accountant", hf_subset="accountant"),
    CustomCEVALEvaluationTask(name="ceval:advanced_mathematics", hf_subset="advanced_mathematics"),
    CustomCEVALEvaluationTask(name="ceval:art_studies", hf_subset="art_studies"),
    CustomCEVALEvaluationTask(name="ceval:basic_medicine", hf_subset="basic_medicine"),
    CustomCEVALEvaluationTask(name="ceval:business_administration", hf_subset="business_administration"),
    CustomCEVALEvaluationTask(name="ceval:chinese_language_and_literature", hf_subset="chinese_language_and_literature"),
    CustomCEVALEvaluationTask(name="ceval:civil_servant", hf_subset="civil_servant"),
    CustomCEVALEvaluationTask(name="ceval:clinical_medicine", hf_subset="clinical_medicine"),
    CustomCEVALEvaluationTask(name="ceval:college_chemistry", hf_subset="college_chemistry"),
    CustomCEVALEvaluationTask(name="ceval:college_economics", hf_subset="college_economics"),
    CustomCEVALEvaluationTask(name="ceval:college_physics", hf_subset="college_physics"),
    CustomCEVALEvaluationTask(name="ceval:college_programming", hf_subset="college_programming"),
    CustomCEVALEvaluationTask(name="ceval:computer_architecture", hf_subset="computer_architecture"),
    CustomCEVALEvaluationTask(name="ceval:computer_network", hf_subset="computer_network"),
    CustomCEVALEvaluationTask(name="ceval:discrete_mathematics", hf_subset="discrete_mathematics"),
    CustomCEVALEvaluationTask(name="ceval:education_science", hf_subset="education_science"),
    CustomCEVALEvaluationTask(name="ceval:electrical_engineer", hf_subset="electrical_engineer"),
    CustomCEVALEvaluationTask(name="ceval:environmental_impact_assessment_engineer", hf_subset="environmental_impact_assessment_engineer"),
    CustomCEVALEvaluationTask(name="ceval:fire_engineer", hf_subset="fire_engineer"),
    CustomCEVALEvaluationTask(name="ceval:high_school_biology", hf_subset="high_school_biology"),
    CustomCEVALEvaluationTask(name="ceval:high_school_chemistry", hf_subset="high_school_chemistry"),
    CustomCEVALEvaluationTask(name="ceval:high_school_chinese", hf_subset="high_school_chinese"),
    CustomCEVALEvaluationTask(name="ceval:high_school_geography", hf_subset="high_school_geography"),
    CustomCEVALEvaluationTask(name="ceval:high_school_history", hf_subset="high_school_history"),
    CustomCEVALEvaluationTask(name="ceval:high_school_mathematics", hf_subset="high_school_mathematics"),
    CustomCEVALEvaluationTask(name="ceval:high_school_physics", hf_subset="high_school_physics"),
    CustomCEVALEvaluationTask(name="ceval:high_school_politics", hf_subset="high_school_politics"),
    CustomCEVALEvaluationTask(name="ceval:ideological_and_moral_cultivation", hf_subset="ideological_and_moral_cultivation"),
    CustomCEVALEvaluationTask(name="ceval:law", hf_subset="law"),
    CustomCEVALEvaluationTask(name="ceval:legal_professional", hf_subset="legal_professional"),
    CustomCEVALEvaluationTask(name="ceval:logic", hf_subset="logic"),
    CustomCEVALEvaluationTask(name="ceval:mao_zedong_thought", hf_subset="mao_zedong_thought"),
    CustomCEVALEvaluationTask(name="ceval:marxism", hf_subset="marxism"),
    CustomCEVALEvaluationTask(name="ceval:metrology_engineer", hf_subset="metrology_engineer"),
    CustomCEVALEvaluationTask(name="ceval:middle_school_biology", hf_subset="middle_school_biology"),
    CustomCEVALEvaluationTask(name="ceval:middle_school_chemistry", hf_subset="middle_school_chemistry"),
    CustomCEVALEvaluationTask(name="ceval:middle_school_geography", hf_subset="middle_school_geography"),
    CustomCEVALEvaluationTask(name="ceval:middle_school_history", hf_subset="middle_school_history"),
    CustomCEVALEvaluationTask(name="ceval:middle_school_mathematics", hf_subset="middle_school_mathematics"),
    CustomCEVALEvaluationTask(name="ceval:middle_school_physics", hf_subset="middle_school_physics"),
    CustomCEVALEvaluationTask(name="ceval:middle_school_politics", hf_subset="middle_school_politics"),
    CustomCEVALEvaluationTask(name="ceval:modern_chinese_history", hf_subset="modern_chinese_history"),
    CustomCEVALEvaluationTask(name="ceval:operating_system", hf_subset="operating_system"),
    CustomCEVALEvaluationTask(name="ceval:physician", hf_subset="physician"),
    CustomCEVALEvaluationTask(name="ceval:plant_protection", hf_subset="plant_protection"),
    CustomCEVALEvaluationTask(name="ceval:probability_and_statistics", hf_subset="probability_and_statistics"),
    CustomCEVALEvaluationTask(name="ceval:professional_tour_guide", hf_subset="professional_tour_guide"),
    CustomCEVALEvaluationTask(name="ceval:sports_science", hf_subset="sports_science"),
    CustomCEVALEvaluationTask(name="ceval:tax_accountant", hf_subset="tax_accountant"),
    CustomCEVALEvaluationTask(name="ceval:teacher_qualification", hf_subset="teacher_qualification"),
    CustomCEVALEvaluationTask(name="ceval:urban_and_rural_planner", hf_subset="urban_and_rural_planner"),
    CustomCEVALEvaluationTask(name="ceval:veterinary_medicine", hf_subset="veterinary_medicine")
]


CEVAL_SUBJECT_MAPPING = {
    'computer_network': ['Computer Network', '计算机网络', 'STEM'],
    'operating_system': ['Operating System', '操作系统', 'STEM'],
    'computer_architecture': ['Computer Architecture', '计算机组成', 'STEM'],
    'college_programming': ['College Programming', '大学编程', 'STEM'],
    'college_physics': ['College Physics', '大学物理', 'STEM'],
    'college_chemistry': ['College Chemistry', '大学化学', 'STEM'],
    'advanced_mathematics': ['Advanced Mathematics', '高等数学', 'STEM'],
    'probability_and_statistics': ['Probability and Statistics', '概率统计', 'STEM'],
    'discrete_mathematics': ['Discrete Mathematics', '离散数学', 'STEM'],
    'electrical_engineer': ['Electrical Engineer', '注册电气工程师', 'STEM'],
    'metrology_engineer': ['Metrology Engineer', '注册计量师', 'STEM'],
    'high_school_mathematics': ['High School Mathematics', '高中数学', 'STEM'],
    'high_school_physics': ['High School Physics', '高中物理', 'STEM'],
    'high_school_chemistry': ['High School Chemistry', '高中化学', 'STEM'],
    'high_school_biology': ['High School Biology', '高中生物', 'STEM'],
    'middle_school_mathematics': ['Middle School Mathematics', '初中数学', 'STEM'],
    'middle_school_biology': ['Middle School Biology', '初中生物', 'STEM'],
    'middle_school_physics': ['Middle School Physics', '初中物理', 'STEM'],
    'middle_school_chemistry': ['Middle School Chemistry', '初中化学', 'STEM'],
    'veterinary_medicine': ['Veterinary Medicine', '兽医学', 'STEM'],
    'college_economics': ['College Economics', '大学经济学', 'Social Science'],
    'business_administration': ['Business Administration', '工商管理', 'Social Science'],
    'marxism': ['Marxism', '马克思主义基本原理', 'Social Science'],
    'mao_zedong_thought': ['Mao Zedong Thought', '毛泽东思想和中国特色社会主义理论体系概论', 'Social Science'],
    'education_science': ['Education Science', '教育学', 'Social Science'],
    'teacher_qualification': ['Teacher Qualification', '教师资格', 'Social Science'],
    'high_school_politics': ['High School Politics', '高中政治', 'Social Science'],
    'high_school_geography': ['High School Geography', '高中地理', 'Social Science'],
    'middle_school_politics': ['Middle School Politics', '初中政治', 'Social Science'],
    'middle_school_geography': ['Middle School Geography', '初中地理', 'Social Science'],
    'modern_chinese_history': ['Modern Chinese History', '近代史纲要', 'Humanities'],
    'ideological_and_moral_cultivation': ['Ideological and Moral Cultivation', '思想道德修养与法律基础', 'Humanities'],
    'logic': ['Logic', '逻辑学', 'Humanities'],
    'law': ['Law', '法学', 'Humanities'],
    'chinese_language_and_literature': ['Chinese Language and Literature', '中国语言文学', 'Humanities'],
    'art_studies': ['Art Studies', '艺术学', 'Humanities'],
    'professional_tour_guide': ['Professional Tour Guide', '导游资格', 'Humanities'],
    'legal_professional': ['Legal Professional', '法律职业资格', 'Humanities'],
    'high_school_chinese': ['High School Chinese', '高中语文', 'Humanities'],
    'high_school_history': ['High School History', '高中历史', 'Humanities'],
    'middle_school_history': ['Middle School History', '初中历史', 'Humanities'],
    'civil_servant': ['Civil Servant', '公务员', 'Other'],
    'sports_science': ['Sports Science', '体育学', 'Other'],
    'plant_protection': ['Plant Protection', '植物保护', 'Other'],
    'basic_medicine': ['Basic Medicine', '基础医学', 'Other'],
    'clinical_medicine': ['Clinical Medicine', '临床医学', 'Other'],
    'urban_and_rural_planner': ['Urban and Rural Planner', '注册城乡规划师', 'Other'],
    'accountant': ['Accountant', '注册会计师', 'Other'],
    'fire_engineer': ['Fire Engineer', '注册消防工程师', 'Other'],
    'environmental_impact_assessment_engineer': ['Environmental Impact Assessment Engineer', '环境影响评价工程师', 'Other'],
    'tax_accountant': ['Tax Accountant', '税务师', 'Other'],
    'physician': ['Physician', '医师资格', 'Other'],
}

def ceval_prompt(line, task_name: str = None):
    """CEVAL prompt without letters"""
    topic = line["subject"]
    _ch_name = CEVAL_SUBJECT_MAPPING[topic][1]
    prompt = f"以下是关于{_ch_name}的单项选择题,请直接给出正确答案的选项。\n题目:"
    prompt += line["question"] + "\n答案是:"

    return Doc(
        task_name=task_name,
        query=prompt,
        choices=[f" {c}" for c in line["choices"]],
        gold_index=line["answer"],
        instruction=f"以下是关于{_ch_name}的单项选择题,请直接给出正确答案的选项。\n",
    )


CEVAL_STRING = [(t, f"custom|{t.name}|0|1") for t in CEVAL_TASKS]
_TASKS_STRINGS.extend(CEVAL_STRING)
_TASKS += CEVAL_TASKS

# ceval
EARLY_SIGNAL_TASKS = ",".join([t[1] for t in COMMON_SENSE_REASONING_STRING] + [t[1] for t in CEVAL_STRING])
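
# NOTE: each EARLY_SIGNAL_TASKS assignment above overwrites the previous one, so the
# "early-signal" group exported below contains the common-sense reasoning tasks plus
# the CEVAL tasks only; the MMLU and CMMLU tasks remain registered in _TASKS /
# TASKS_TABLE but are not part of this group.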


# Convert to dict for lighteval
TASKS_TABLE = [task.as_dict() for task in _TASKS]
# You can have a few pre-organised groups of tasks
TASKS_GROUPS = {
    "early-signal": EARLY_SIGNAL_TASKS,
}
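
A quick way to sanity-check the file before launching a full run is to import it and inspect the exported objects. This is only a sketch: it assumes lighteval is installed and that the snippet is run from the evaluation/ directory so the module is importable.

import lighteval_tasks_ultrafineweb as tasks

print(len(tasks.TASKS_TABLE))                    # number of task configs handed to lighteval
print(tasks.TASKS_GROUPS["early-signal"][:120])  # start of the "early-signal" task string

If the installed lighteval version resolves group names from TASKS_GROUPS, the group can then be passed as --tasks "early-signal" to the same accelerate command shown in the module docstring, with --custom_tasks pointing at this file.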