KnutJaegersberg committed (verified)
Commit daa053d · 1 Parent(s): 4d546cc

Upload 125 files

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +556 -3
  2. fdc_level2=02/train-00001-of-00437.parquet +3 -0
  3. fdc_level2=02/train-00010-of-00437.parquet +3 -0
  4. fdc_level2=02/train-00047-of-00437.parquet +3 -0
  5. fdc_level2=02/train-00142-of-00437.parquet +3 -0
  6. fdc_level2=02/train-00250-of-00437.parquet +3 -0
  7. fdc_level2=02/train-00352-of-00437.parquet +3 -0
  8. fdc_level2=02/train-00436-of-00437.parquet +3 -0
  9. fdc_level2=05/train-00000-of-00001.parquet +3 -0
  10. fdc_level2=06/train-00000-of-00002.parquet +3 -0
  11. fdc_level2=06/train-00001-of-00002.parquet +3 -0
  12. fdc_level2=07/train-00001-of-00095.parquet +3 -0
  13. fdc_level2=07/train-00049-of-00095.parquet +3 -0
  14. fdc_level2=07/train-00094-of-00095.parquet +3 -0
  15. fdc_level2=08/train-00000-of-00001.parquet +3 -0
  16. fdc_level2=09/train-00000-of-00001.parquet +3 -0
  17. fdc_level2=11/train-00000-of-00001.parquet +3 -0
  18. fdc_level2=13/train-00002-of-00184.parquet +3 -0
  19. fdc_level2=13/train-00051-of-00184.parquet +3 -0
  20. fdc_level2=13/train-00182-of-00184.parquet +3 -0
  21. fdc_level2=14/train-00000-of-00001.parquet +3 -0
  22. fdc_level2=15/train-00001-of-00213.parquet +3 -0
  23. fdc_level2=15/train-00049-of-00213.parquet +3 -0
  24. fdc_level2=15/train-00149-of-00213.parquet +3 -0
  25. fdc_level2=16/train-00000-of-00002.parquet +3 -0
  26. fdc_level2=16/train-00001-of-00002.parquet +3 -0
  27. fdc_level2=17/train-00002-of-00028.parquet +3 -0
  28. fdc_level2=17/train-00026-of-00028.parquet +3 -0
  29. fdc_level2=18/train-00000-of-00003.parquet +3 -0
  30. fdc_level2=18/train-00001-of-00003.parquet +3 -0
  31. fdc_level2=18/train-00002-of-00003.parquet +3 -0
  32. fdc_level2=19/train-00002-of-00014.parquet +3 -0
  33. fdc_level2=19/train-00006-of-00014.parquet +3 -0
  34. fdc_level2=19/train-00012-of-00014.parquet +3 -0
  35. fdc_level2=21/train-00000-of-00001.parquet +3 -0
  36. fdc_level2=22/train-00002-of-00158.parquet +3 -0
  37. fdc_level2=22/train-00049-of-00158.parquet +3 -0
  38. fdc_level2=22/train-00149-of-00158.parquet +3 -0
  39. fdc_level2=23/train-00002-of-00176.parquet +3 -0
  40. fdc_level2=23/train-00048-of-00176.parquet +3 -0
  41. fdc_level2=23/train-00148-of-00176.parquet +3 -0
  42. fdc_level2=24/train-00003-of-00152.parquet +3 -0
  43. fdc_level2=24/train-00049-of-00152.parquet +3 -0
  44. fdc_level2=24/train-00149-of-00152.parquet +3 -0
  45. fdc_level2=25/train-00001-of-00013.parquet +3 -0
  46. fdc_level2=25/train-00007-of-00013.parquet +3 -0
  47. fdc_level2=25/train-00012-of-00013.parquet +3 -0
  48. fdc_level2=26/train-00002-of-00064.parquet +3 -0
  49. fdc_level2=26/train-00048-of-00064.parquet +3 -0
  50. fdc_level2=26/train-00062-of-00064.parquet +3 -0
README.md CHANGED
@@ -1,3 +1,556 @@
- ---
- license: apache-2.0
- ---
---
license: apache-2.0
---
# 🌐 Essential-Web: FDC Level-2 Partitioned Dataset

## 📋 Dataset Description

This dataset contains a small sample from [**Essential-Web**](https://huggingface.co/datasets/EssentialAI/essential-web), partitioned by Free Decimal Correspondence (FDC) level-2 categories. Essential-Web is a 24-trillion-token web dataset with extensive document-level metadata, designed to enable rapid dataset curation through SQL-like filtering.

## 🔍 Free Decimal Correspondence (FDC)

The FDC taxonomy is an open classification system inspired by the Dewey Decimal System. Level-2 categories provide broad subject-matter classifications that let researchers quickly identify and filter relevant content domains.

For help navigating FDC codes, see: https://www.librarything.com/mds

## ⚙️ Dataset Creation

The source documents were classified with EAI-Taxonomy-0.5b, a classifier trained on synthetic labels generated by open-weight LLMs. Classification ran over 23.6 billion web documents and required approximately 90,000 AMD MI300X GPU-hours.

## 🎯 Performance

Datasets curated from Essential-Web using simple metadata filters have demonstrated competitive performance relative to top-performing web-curated datasets:
- 🧮 **Math**: within 8.0% of web-curated baselines
- 💻 **Web Code**: 14.3% above web-curated baselines
- 🔬 **STEM**: 24.5% above web-curated baselines
- 🩺 **Medical**: 8.6% above web-curated baselines

## 🏗️ Dataset Structure

The dataset is organized by FDC level-2 categories, which provide a Dewey Decimal-inspired taxonomy for classifying web content by subject matter. Files are stored in Hive-style partition directories such as:

```
fdc_level2=02/
fdc_level2=05/
fdc_level2=06/
...
```

Each partition contains documents labeled with the corresponding FDC classification along with associated taxonomy metadata.

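As a quick sanity check, the level-2 code can be read straight out of a partition path; a minimal sketch of that parsing (the helper name is our own, not part of any dataset tooling):

```python
# Minimal sketch: recover the FDC level-2 code from a Hive-style
# partition path such as "fdc_level2=02/train-00001-of-00437.parquet".
# The helper name is illustrative, not part of the dataset tooling.
def fdc_code_from_path(path: str) -> str:
    for part in path.split("/"):
        if part.startswith("fdc_level2="):
            return part.split("=", 1)[1]
    raise ValueError(f"no fdc_level2 partition in {path!r}")

print(fdc_code_from_path("fdc_level2=02/train-00001-of-00437.parquet"))  # → 02
```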
# Dataset Schema Documentation

## Overview

This dataset contains web-crawled text data with comprehensive metadata, quality signals, and taxonomic classifications. Each record represents a document extracted from web archives, with detailed provenance tracking and quality assessment metrics.

## Core Fields

| Field | Type | Description | Path |
|-------|------|-------------|------|
| `id` | `Int64` | Unique identifier based on document hash | `id` |
| `text` | `String` | The main textual content of the document | `text` |

## EAI Taxonomy Classification

A comprehensive hierarchical classification system with primary and secondary labels, and the most important feature of this dataset. The taxonomy provides detailed subject categorization, document type identification, content quality assessment, and extraction quality indicators.

<details>
<summary><strong>Free Decimal Correspondence (FDC)</strong></summary>

A Dewey Decimal-inspired classification system with 3-level hierarchical labels. The FDC provides nested categories in which each successive level refines its parent category. It is designed to be compatible with the Dewey Decimal System for library cataloging.

**Level Structure:**
- **Level 1**: Top-level categories (0-9) covering broad subject areas such as General works, Philosophy, Religion, Social Sciences, etc.
- **Level 2**: Sub-divisions (00-99) that refine Level 1 categories
- **Level 3**: Specific categories (000-999) that further refine Level 2 categories

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main classification code | `eai_taxonomy.free_decimal_correspondence.primary.code` |
| Primary Level 1 | Top-level category (0=General works, 1=Philosophy, 2=Religion, 3=Social Sciences, 4=Language, 5=Science, 6=Technology, 7=Arts, 8=Literature, 9=History/Geography) | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_1` |
| Primary Level 2 | Mid-level category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_2` |
| Primary Level 3 | Specific category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_3` |
| Secondary Code | Alternative classification code | `eai_taxonomy.free_decimal_correspondence.secondary.code` |
| Secondary Level 1 | Alternative top-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_1` |
| Secondary Level 2 | Alternative mid-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_2` |
| Secondary Level 3 | Alternative specific category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_3` |

We recommend this viewer for navigating the FDC categories when curating filters: https://www.librarything.com/mds

</details>
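
Since each FDC level is a digit-prefix of the next, a full 3-digit code expands into its level prefixes mechanically; an illustrative sketch under that assumption (the helper is hypothetical, and real codes may carry further decimal precision):

```python
# Illustrative: a Dewey-style 3-digit FDC code nests by digit prefix,
# e.g. "512" sits under level 1 "5" (Science) and level 2 "51".
# This helper is hypothetical, not part of the dataset tooling.
def fdc_level_prefixes(code: str) -> dict:
    code = str(code)
    return {"level_1": code[:1], "level_2": code[:2], "level_3": code[:3]}

print(fdc_level_prefixes("512"))  # → {'level_1': '5', 'level_2': '51', 'level_3': '512'}
```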

<details>
<summary><strong>Bloom's Taxonomy Integration</strong></summary>

Based on Anderson and Krathwohl's 2001 revision of Bloom's Taxonomy of Educational Objectives, providing two complementary categorization dimensions for educational content analysis.

### Knowledge Domain
Categorizes the type of knowledge demonstrated in the document:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.primary.code` |
| Primary Label | Main knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.primary.label` |
| Secondary Code | Alternative knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.secondary.code` |
| Secondary Label | Alternative knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Factual | Basic elements to learn or solve problems |
| `2` | Conceptual | Interrelationships between basic elements within a larger context |
| `3` | Procedural | Methods and techniques in the discipline |
| `4` | Metacognitive | Awareness of how learning works in relation to oneself |

### Cognitive Processing Level
Assesses the learning and thinking skill levels demonstrated by the document author:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main cognitive process code | `eai_taxonomy.bloom_cognitive_process.primary.code` |
| Primary Label | Main cognitive process label | `eai_taxonomy.bloom_cognitive_process.primary.label` |
| Secondary Code | Alternative cognitive process code | `eai_taxonomy.bloom_cognitive_process.secondary.code` |
| Secondary Label | Alternative cognitive process label | `eai_taxonomy.bloom_cognitive_process.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Remember | Retrieve relevant knowledge from memory |
| `2` | Understand | Determine the meaning of instructional messages |
| `3` | Apply | Use a procedure in a given situation |
| `4` | Analyze | Break materials into components and determine relationships |
| `5` | Evaluate | Make judgments based on criteria and standards |
| `6` | Create | Create new or original work |

</details>

<details>
<summary><strong>Document Characteristics</strong></summary>

### Document Type v1
In-house classification of common web document types and formats:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main document type code | `eai_taxonomy.document_type_v1.primary.code` |
| Primary Label | Main document type label | `eai_taxonomy.document_type_v1.primary.label` |
| Secondary Code | Alternative document type code | `eai_taxonomy.document_type_v1.secondary.code` |
| Secondary Label | Alternative document type label | `eai_taxonomy.document_type_v1.secondary.label` |

**Possible Values:**
| Code | Label | Examples |
|------|-------|----------|
| `-1` | Abstain | Unable to classify |
| `1` | News/Editorial | CNN articles, opinion columns |
| `2` | Academic/Research | arXiv papers, research articles |
| `3` | Reference/Encyclopedic/Educational | FAQs, Wikipedia entries |
| `4` | Code/Software | GitHub repos, code examples |
| `5` | Social/Forum | Conversation threads, Q&A boards |
| `6` | Promotional/Advertisement | Product pages, calls to action |
| `7` | Search/Directory/Bibliography | Link pages, search results |
| `8` | Adult/Pornographic | Adult content |
| `9` | Personal/Misc | Blogs, user profiles |
| `10` | Machine-Generated | Lorem ipsum, garbled text |
| `11` | Legal/Regulatory | Contracts, terms of service |
| `12` | Government/Political | Legislation, press releases |
| `13` | Literary/Creative | Poems, short stories |
| `14` | Reviews/Critiques | Film critiques, product reviews |
| `15` | E-Commerce/Marketplace | eBay listings, Amazon pages |
| `16` | Images/Videos/Audio | YouTube videos, Imgur pages |
| `17` | Other/Unclassified | Documents that resist classification |

### Document Type v2
Updated classification based on the WebOrganizer taxonomy, with refined categories for improved document classification accuracy:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main document type code (v2) | `eai_taxonomy.document_type_v2.primary.code` |
| Primary Label | Main document type label (v2) | `eai_taxonomy.document_type_v2.primary.label` |
| Secondary Code | Alternative document type code (v2) | `eai_taxonomy.document_type_v2.secondary.code` |
| Secondary Label | Alternative document type label (v2) | `eai_taxonomy.document_type_v2.secondary.label` |

**Complete Value Mapping:**
| Code | Label | Examples |
|------|-------|----------|
| `-1` | Abstain | Documents requiring human review |
| `1` | About (Org.) | Company about pages, mission statements |
| `2` | About (Personal) | Personal bios, LinkedIn profiles |
| `3` | Academic Writing | Research papers, abstracts, dissertations |
| `4` | Audio Transcript | Interview transcripts, court records, captions |
| `5` | Comment Section | Reddit threads, blog comments |
| `6` | Content Listing | Site maps, product catalogs, directory listings |
| `7` | Creative Writing | Song lyrics, novel excerpts, poetry |
| `8` | Documentation | API docs, README files, user manuals |
| `9` | FAQ | FAQ pages, Q&A lists |
| `10` | Knowledge Article | Wikipedia articles, Britannica entries |
| `11` | Legal Notices | Privacy policies, license agreements, terms of service |
| `12` | Listicle | Buzzfeed-style articles, "Top 10" lists |
| `13` | News (Org.) | Government blog posts, corporate announcements |
| `14` | News Article | Newspaper articles, CNN content, breaking news |
| `15` | Nonfiction Writing | Editorials, obituaries, memoirs, opinion pieces |
| `16` | Personal Blog | Personal journals, diary entries, lifestyle blogs |
| `17` | Product Page | Product descriptions, course offerings, sales pages |
| `18` | Q&A Forum | Quora posts, Stack Exchange discussions |
| `19` | Spam / Ads | SEO keyword stuffing, promotional spam |
| `20` | Structured Data | Datasheets, glossaries, JSON files, databases |
| `21` | Customer Support | Help articles, troubleshooting guides |
| `22` | Truncated | Paywalled sites, image galleries, partial content |
| `23` | Tutorial | Cooking recipes, WikiHow pages, step-by-step guides |
| `24` | User Review | Yelp reviews, TripAdvisor feedback, product reviews |
| `25` | Other/Unclassified | Miscellaneous documents not fitting other categories |

205
+ ### Extraction Artifacts
206
+ Assessment of technical extraction quality, identifying issues from HTML-to-text conversion:
207
+
208
+ | Component | Description | Path |
209
+ |-----------|-------------|------|
210
+ | Primary Code | Main extraction artifact code | `eai_taxonomy.extraction_artifacts.primary.code` |
211
+ | Primary Label | Main extraction artifact label | `eai_taxonomy.extraction_artifacts.primary.label` |
212
+ | Secondary Code | Alternative extraction artifact code | `eai_taxonomy.extraction_artifacts.secondary.code` |
213
+ | Secondary Label | Alternative extraction artifact label | `eai_taxonomy.extraction_artifacts.secondary.label` |
214
+
215
+ **Possible Values:**
216
+ | Code | Label | Description |
217
+ |------|-------|-------------|
218
+ | `-1` | Abstain | Unable to determine |
219
+ | `0` | No Artifacts | Clean text with no leftover HTML or irrelevant elements |
220
+ | `1` | Leftover HTML | HTML/code artifacts remaining after extraction |
221
+ | `2` | Text Extraction Errors | Broken math expressions, encoding errors, improperly parsed tables |
222
+ | `3` | Irrelevant Content | Headers, footers, nav menus extracted by mistake |
223
+ | `4` | Indeterminate | Insufficient content to judge |
224
+
225
+ ### Missing Content
226
+ Assessment of content completeness and extraction success:
227
+
228
+ | Component | Description | Path |
229
+ |-----------|-------------|------|
230
+ | Primary Code | Main missing content code | `eai_taxonomy.missing_content.primary.code` |
231
+ | Primary Label | Main missing content label | `eai_taxonomy.missing_content.primary.label` |
232
+ | Secondary Code | Alternative missing content code | `eai_taxonomy.missing_content.secondary.code` |
233
+ | Secondary Label | Alternative missing content label | `eai_taxonomy.missing_content.secondary.label` |
234
+
235
+ **Possible Values:**
236
+ | Code | Label | Description |
237
+ |------|-------|-------------|
238
+ | `-1` | Abstain | Unable to determine |
239
+ | `0` | No Missing Content | Complete and coherent text |
240
+ | `1` | Truncated Snippets | Obvious "...", incomplete paragraphs, cut-off text |
241
+ | `2` | Click Here References | "Download here", "Click here" without linked content |
242
+ | `3` | Incoherent Flow | Unreadable or illogical flow due to missing context |
243
+ | `4` | Missing Images or Figures | Placeholders or references to missing visual content |
244
+ | `5` | Missing Referenced Data | References to absent tables/datasets (e.g., "See Table 3") |
245
+ | `6` | Indeterminate | Insufficient content to judge |
246
+
247
+ ### Text Structure Information
248
+
249
+ | Field | Type | Description | Path |
250
+ |-------|------|-------------|------|
251
+ | Line Start Indices | `List[Int32]` | Starting indices of each line | `line_start_n_end_idx.line_start_idx` |
252
+ | Line End Indices | `List[Int32]` | Ending indices of each line | `line_start_n_end_idx.line_end_idx` |
253
+
254
+ </details>
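
When curating with these fields, a common pattern is to keep only documents whose primary codes match a whitelist. A minimal sketch over stand-in records shaped like the schema paths above (the sample records and the chosen codes are invented for illustration):

```python
# Minimal sketch: filter stand-in records by taxonomy codes, following the
# nested layout implied by the schema paths above. Records are invented.
docs = [
    {"id": 1, "eai_taxonomy": {"document_type_v1": {"primary": {"code": 2}},
                               "extraction_artifacts": {"primary": {"code": 0}}}},
    {"id": 2, "eai_taxonomy": {"document_type_v1": {"primary": {"code": 6}},
                               "extraction_artifacts": {"primary": {"code": 0}}}},
    {"id": 3, "eai_taxonomy": {"document_type_v1": {"primary": {"code": 2}},
                               "extraction_artifacts": {"primary": {"code": 1}}}},
]

def keep(doc) -> bool:
    tax = doc["eai_taxonomy"]
    is_academic = tax["document_type_v1"]["primary"]["code"] == 2   # Academic/Research
    is_clean = tax["extraction_artifacts"]["primary"]["code"] == 0  # No Artifacts
    return is_academic and is_clean

kept = [d["id"] for d in docs if keep(d)]
print(kept)  # → [1]
```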

<details>
<summary><strong>Content Quality Dimensions</strong></summary>

Quality assessment inspired by NaturalReasoning and FineWeb efforts to categorize web data by information sophistication.

### Reasoning Depth
Assesses the complexity and sophistication of logical reasoning in the document:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main reasoning depth code | `eai_taxonomy.reasoning_depth.primary.code` |
| Primary Label | Main reasoning depth label | `eai_taxonomy.reasoning_depth.primary.label` |
| Secondary Code | Alternative reasoning depth code | `eai_taxonomy.reasoning_depth.secondary.code` |
| Secondary Label | Alternative reasoning depth label | `eai_taxonomy.reasoning_depth.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | No Reasoning | Facts present but no evidence of reasoning |
| `2` | Basic Reasoning | Basic analysis with minimal explanation and summarization |
| `3` | Intermediate Reasoning | Some logical steps connecting ideas and structured thinking |
| `4` | Advanced Reasoning | Multi-step reasoning and thorough analysis with well-developed explanations |
| `5` | Exceptional Reasoning | Novel abstractions, theoretical frameworks, long chain-of-thought, original insights, or proofs |
| `6` | Indeterminate | Insufficient context to judge |

### Technical Correctness
Evaluates the accuracy and precision of technical information:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main technical correctness code | `eai_taxonomy.technical_correctness.primary.code` |
| Primary Label | Main technical correctness label | `eai_taxonomy.technical_correctness.primary.label` |
| Secondary Code | Alternative technical correctness code | `eai_taxonomy.technical_correctness.secondary.code` |
| Secondary Label | Alternative technical correctness label | `eai_taxonomy.technical_correctness.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Technically Flawed | Significant errors undermining content validity |
| `2` | Partially Correct | Some correctness but contains flaws, omissions, or errors |
| `3` | Mostly Correct | Technical correctness with minor flaws or incomplete explanations |
| `4` | Highly Correct | High technical correctness with precise definitions and clear explanations |
| `5` | Exceptionally Correct | Exceptional technical correctness with formal proofs and flawless content |
| `6` | Not Applicable/Indeterminate | No technical content or insufficient context |

### Education Level
Assesses the educational background required to comprehend the content:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main education level code | `eai_taxonomy.education_level.primary.code` |
| Primary Label | Main education level label | `eai_taxonomy.education_level.primary.label` |
| Secondary Code | Alternative education level code | `eai_taxonomy.education_level.secondary.code` |
| Secondary Label | Alternative education level label | `eai_taxonomy.education_level.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | General Audience | Accessible to anyone with basic literacy; simple terms |
| `2` | High School Level | Requires high school education; specialized terminology explained for non-experts |
| `3` | Undergraduate Level | Requires college education; uses specialized terminology and assumes background knowledge |
| `4` | Graduate/Expert Level | Requires graduate education or domain expertise; assumes deep background knowledge |
| `5` | Indeterminate | Insufficient content to judge educational level |

</details>
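
Because each dimension carries both a primary and a secondary code, filters often accept a document if either slot clears the bar. A sketch over invented records shaped like the paths above:

```python
# Sketch: accept a document when its primary OR secondary reasoning-depth
# slot is Advanced (4) or Exceptional (5). Records are invented examples;
# code 6 (Indeterminate) is deliberately not counted as deep reasoning.
docs = [
    {"id": 1, "eai_taxonomy": {"reasoning_depth": {"primary": {"code": 4}, "secondary": {"code": 2}}}},
    {"id": 2, "eai_taxonomy": {"reasoning_depth": {"primary": {"code": 2}, "secondary": {"code": 5}}}},
    {"id": 3, "eai_taxonomy": {"reasoning_depth": {"primary": {"code": 1}, "secondary": {"code": -1}}}},
]

def deep_reasoning(doc) -> bool:
    rd = doc["eai_taxonomy"]["reasoning_depth"]
    return any(rd[slot]["code"] in (4, 5) for slot in ("primary", "secondary"))

print([d["id"] for d in docs if deep_reasoning(d)])  # → [1, 2]
```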

<details>
<summary><strong>Metadata</strong></summary>

## Metadata Structure

The `metadata` field contains a nested structure with web archive information:

| Field | Type | Description | Path |
|-------|------|-------------|------|
| **URL Information** | | | |
| URL | `String` | Original URL of the document | `metadata.url` |
| Source Domain | `String` | Domain name of the source | `metadata.source_domain` |
| Snapshot ID | `String` | Identifier for the web archive snapshot | `metadata.snapshot_id` |
| **WARC Metadata** | | WARC (Web ARChive) format metadata | |
| Content Length | `String` | Size of the content | `metadata.warc_metadata.Content-Length` |
| Content Type | `String` | MIME type of the content | `metadata.warc_metadata.Content-Type` |
| Block Digest | `String` | Checksum of the WARC block | `metadata.warc_metadata.WARC-Block-Digest` |
| Concurrent To | `String` | Related WARC records | `metadata.warc_metadata.WARC-Concurrent-To` |
| Date | `String` | Timestamp of the crawl | `metadata.warc_metadata.WARC-Date` |
| IP Address | `String` | Source server IP address | `metadata.warc_metadata.WARC-IP-Address` |
| Payload Type | `String` | Identified content type | `metadata.warc_metadata.WARC-Identified-Payload-Type` |
| Payload Digest | `String` | Checksum of the payload | `metadata.warc_metadata.WARC-Payload-Digest` |
| Record ID | `String` | Unique WARC record identifier | `metadata.warc_metadata.WARC-Record-ID` |
| Target URI | `String` | Original target URL | `metadata.warc_metadata.WARC-Target-URI` |
| Truncated | `String` | Truncation status | `metadata.warc_metadata.WARC-Truncated` |
| Type | `String` | WARC record type | `metadata.warc_metadata.WARC-Type` |
| Warcinfo ID | `String` | Associated warcinfo record | `metadata.warc_metadata.WARC-Warcinfo-ID` |
| **Additional Info** | | | |
| WARC Info | `String` | Additional WARC information | `metadata.warc_info` |

</details>

<details>
<summary><strong>Quality Signals</strong></summary>

The dataset includes two comprehensive quality assessment frameworks:

## Red Pajama v2 Quality Metrics

Text quality indicators derived from the Red Pajama v2 filtering pipeline:

### Content Structure Metrics
| Metric | Description | Path |
|--------|-------------|------|
| Original Length | Original document length | `quality_signals.red_pajama_v2.ccnet_original_length` |
| Original Lines | Number of lines in original document | `quality_signals.red_pajama_v2.ccnet_original_nlines` |
| Sentence Count | Total sentence count | `quality_signals.red_pajama_v2.rps_doc_num_sentences` |
| Word Count | Total word count | `quality_signals.red_pajama_v2.rps_doc_word_count` |
| Mean Word Length | Average word length | `quality_signals.red_pajama_v2.rps_doc_mean_word_length` |

### Language Quality Metrics
| Metric | Description | Path |
|--------|-------------|------|
| Stop Word Fraction | Proportion of stop words | `quality_signals.red_pajama_v2.rps_doc_stop_word_fraction` |
| Unique Words Fraction | Fraction of unique words | `quality_signals.red_pajama_v2.rps_doc_frac_unique_words` |
| All Caps Words | Fraction of words in all capitals | `quality_signals.red_pajama_v2.rps_doc_frac_all_caps_words` |
| Non-Alphabetic Words | Fraction of non-alphabetic words | `quality_signals.red_pajama_v2.rps_doc_frac_no_alph_words` |
| Unigram Entropy | Entropy measure of word distribution | `quality_signals.red_pajama_v2.rps_doc_unigram_entropy` |

### Content Pattern Analysis
| Metric | Description | Path |
|--------|-------------|------|
| Curly Bracket Density | Curly bracket density (code indicator) | `quality_signals.red_pajama_v2.rps_doc_curly_bracket` |
| Symbol-to-Word Ratio | Symbol-to-word ratio | `quality_signals.red_pajama_v2.rps_doc_symbol_to_word_ratio` |
| Ellipsis Line Endings | Fraction of lines ending with an ellipsis | `quality_signals.red_pajama_v2.rps_doc_frac_lines_end_with_ellipsis` |
| Lorem Ipsum Detection | Lorem ipsum text detection | `quality_signals.red_pajama_v2.rps_doc_lorem_ipsum` |
| Offensive Content | Potentially offensive content detection | `quality_signals.red_pajama_v2.rps_doc_ldnoobw_words` |
| UT1 Blacklist | UT1 blacklist filtering score | `quality_signals.red_pajama_v2.rps_doc_ut1_blacklist` |

### Duplication Detection
| Metric | Description | Path |
|--------|-------------|------|
| 5-gram Duplication | Character-level duplication for 5-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_5grams` |
| 6-gram Duplication | Character-level duplication for 6-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_6grams` |
| 7-gram Duplication | Character-level duplication for 7-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_7grams` |
| 8-gram Duplication | Character-level duplication for 8-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_8grams` |
| 9-gram Duplication | Character-level duplication for 9-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_9grams` |
| 10-gram Duplication | Character-level duplication for 10-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_10grams` |
| Top 2-gram Coverage | Most frequent 2-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_2gram` |
| Top 3-gram Coverage | Most frequent 3-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_3gram` |
| Top 4-gram Coverage | Most frequent 4-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_4gram` |

### Domain Importance Scores
| Metric | Description | Path |
|--------|-------------|------|
| Books Importance | Similarity to book content | `quality_signals.red_pajama_v2.rps_doc_books_importance` |
| Books Importance (Length Corrected) | Length-corrected books similarity | `quality_signals.red_pajama_v2.rps_doc_books_importance_length_correction` |
| OpenWebText Importance | Similarity to OpenWebText | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance` |
| OpenWebText Importance (Length Corrected) | Length-corrected OpenWebText similarity | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance_length_correction` |
| Wikipedia Importance | Similarity to Wikipedia | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance` |
| Wikipedia Importance (Length Corrected) | Length-corrected Wikipedia similarity | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance_length_correction` |

## FastText Classification Scores

Domain and content type classification probabilities:

| Metric | Description | Path |
|--------|-------------|------|
| DCLM Score | DataComp-LM classifier score | `quality_signals.fasttext.dclm` |
| English Confidence | English language confidence | `quality_signals.fasttext.english` |
| Educational Content | Educational content approximation | `quality_signals.fasttext.fineweb_edu_approx` |
| General Math | General mathematics content | `quality_signals.fasttext.eai_general_math` |
| Web Math | OpenWebMath-style web mathematics content | `quality_signals.fasttext.eai_open_web_math` |
| Code Content | Code content detection | `quality_signals.fasttext.eai_web_code` |

</details>
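
Quality signals are typically applied as numeric thresholds rather than exact matches. A sketch over invented values (the records and the cutoffs are illustrative; tune thresholds per use case):

```python
# Sketch: threshold filtering on FastText quality signals. The records and
# the 0.5 / 0.9 cutoffs are invented for illustration only.
docs = [
    {"id": 1, "quality_signals": {"fasttext": {"dclm": 0.91, "english": 0.99}}},
    {"id": 2, "quality_signals": {"fasttext": {"dclm": 0.12, "english": 0.98}}},
]

def passes(doc, dclm_min=0.5, english_min=0.9) -> bool:
    ft = doc["quality_signals"]["fasttext"]
    return ft["dclm"] >= dclm_min and ft["english"] >= english_min

print([d["id"] for d in docs if passes(d)])  # → [1]
```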
431
+
432
+ ## How to Load the Dataset
433
+
434
+ This section provides examples of how to load the `EssentialAI/essential-web-1t-sample-fdc-partitioned` dataset using different Python libraries and frameworks.
435
+
436
+ ### Using Hugging Face Datasets (Standard Method)
437
+
438
+ The simplest way to load the dataset is using the Hugging Face `datasets` library:
439
+
440
+ ```python
441
+ from datasets import load_dataset
442
+
443
+ # Load the entire dataset
444
+ dataset = load_dataset("EssentialAI/essential-web-1t-sample-fdc-partitioned")
445
+
446
+ # View dataset structure
447
+ print(dataset)
448
+ print(f"Number of examples: {len(dataset['train'])}")
449
+ ```
450
+
451
+ You can also load the dataset in streaming mode to avoid downloading the entire dataset at once:
452
+
453
+ ```python
454
+ from datasets import load_dataset
455
+
456
+ # Load in streaming mode
457
+ dataset = load_dataset("EssentialAI/essential-web-1t-sample-fdc-partitioned", streaming=True)
458
+ data_stream = dataset["train"]
459
+
460
+ # Iterate through examples
461
+ for example in data_stream.take(5):
462
+ print(example)
463
+ ```
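The repository is partitioned into `fdc_level2=XX` directories of parquet shards, so a load can also be restricted to specific partitions. A small sketch of the glob-building step, assuming the partition layout visible in this repo's file list (the `partition_globs` helper is hypothetical, not a library function):

```python
REPO = "EssentialAI/essential-web-1t-sample-fdc-partitioned"

def partition_globs(level2_codes):
    """Map each fdc_level2 code (e.g. '02') to a glob over that partition's shards."""
    return {code: f"fdc_level2={code}/train-*.parquet" for code in level2_codes}

globs = partition_globs(["02", "07"])
print(globs["02"])  # fdc_level2=02/train-*.parquet
```

The resulting globs can then be passed as `data_files` to `load_dataset(REPO, data_files=list(globs.values()), streaming=True)` so that only the selected partitions are streamed.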
464
+
465
+ ### Using PySpark
466
+
467
+ For large-scale distributed processing, you can load the dataset using PySpark with the `pyspark_huggingface` library:
468
+
469
+ ```python
470
+ # First install the required library:
471
+ # pip install pyspark_huggingface
472
+
473
+ import pyspark_huggingface
474
+ from pyspark.sql import SparkSession
475
+
476
+ # Initialize Spark session
477
+ spark = SparkSession.builder.appName("EAI-Taxonomy-Web-1T-Sample-FDC-Partitioned").getOrCreate()
478
+
479
+ # Load the dataset using the "huggingface" data source
480
+ df = spark.read.format("huggingface").load("EssentialAI/essential-web-1t-sample-fdc-partitioned")
481
+
482
+ # Basic dataset exploration
483
+ print(f"Dataset shape: {df.count()} rows, {len(df.columns)} columns")
484
+ df.show(10)
485
+ df.printSchema()
486
+
487
+ # Load only specific columns for efficiency
488
+ df_subset = (
489
+ spark.read.format("huggingface")
490
+ .option("columns", '["column1", "column2"]') # Replace with actual column names
491
+ .load("EssentialAI/essential-web-1t-sample-fdc-partitioned")
492
+ )
493
+
494
+ # Run SQL queries on the dataset
495
+ df.createOrReplaceTempView("eai_web_1t_sample_fdc_partitioned_dataset")
496
+ result = spark.sql("""
497
+ SELECT COUNT(*) as total_examples
498
+ FROM eai_web_1t_sample_fdc_partitioned_dataset
499
+ """)
500
+ result.show()
501
+ ```
### Using Daft

Daft is a modern DataFrame library optimized for machine learning workloads. You can load the dataset directly from Hugging Face:

```python
import daft

# Load the entire dataset
df = daft.read_parquet("hf://datasets/EssentialAI/essential-web-1t-sample-fdc-partitioned")

# Basic exploration
print("Dataset schema:")
df.schema()

print("First 5 rows:")
df.show(5)
```

If you need to access private datasets or use authentication:

```python
import daft
from daft.io import IOConfig, HTTPConfig

io_config = IOConfig(http=HTTPConfig(bearer_token="your_token"))
df = daft.read_parquet("hf://datasets/EssentialAI/essential-web-1t-sample-fdc-partitioned", io_config=io_config)
```

### Installation Requirements

Make sure you have the required libraries installed:

```bash
# For Hugging Face datasets
pip install datasets

# For PySpark with Hugging Face integration
pip install pyspark_huggingface

# For Daft
pip install daft
```

## 🎓 Citation

If you use this dataset, please cite the Essential-Web paper:

```bibtex
@article{essentialweb2025,
  title={Essential-Web: 24T tokens of organized web data},
  author={[Authors]},
  year={2025}
}
```
fdc_level2=02/train-00001-of-00437.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94e3a99a65032c2271527c363f7f4bb3f5fee36e9b353164c3d7d706350eebee
+ size 86146814
fdc_level2=02/train-00010-of-00437.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6385b483a2ec39c20bec2cf1870f641bbcb634213b0e233ce6c4633dec279346
+ size 85885558
fdc_level2=02/train-00047-of-00437.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f485217dacdfef1f0642ddd79b3890e8d14567116f07bdf604ad834a540ddd3
+ size 86782583
fdc_level2=02/train-00142-of-00437.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6666b2d8e047c72a2d2a54a24cb1cb1d804f8e5c09dadc6eb9582376073aff17
+ size 88711390
fdc_level2=02/train-00250-of-00437.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8a65010ca71875324762e89722db3c3bcaeca02a867018d62283ec3b3d9d321
+ size 86304874
fdc_level2=02/train-00352-of-00437.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f945184f3e984196cb04cd5d22633e99f47973c1b9a08426f37ba75ef4d9429
+ size 88032715
fdc_level2=02/train-00436-of-00437.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ce2bfdeb9430a616ef9ac88d8c7a0e539a611b99e1bd35aeecf40055aee725d
+ size 87585336
fdc_level2=05/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9faf15e9ea313caf0d2463f720f983a462c10a12881a2ea19854870746912aa5
+ size 2068122
fdc_level2=06/train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a23286ebbadcd0724b1699dab33b0c811952e1eaa339eb66c3f5a545f21cdbf7
+ size 113209772
fdc_level2=06/train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:049584f9b58a20b32c965c77e0ac6264a75478c9200a0285e5de3014326f87ba
+ size 114308204
fdc_level2=07/train-00001-of-00095.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cd6a5463b1a474fdd3e40a0a5370e74ae3a76bc4cc4ff3538042898888bd351
+ size 125753422
fdc_level2=07/train-00049-of-00095.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82a212b6ee7f6f0d257dcda6511f3cecd212eb4c5056f1922506556c98d7e329
+ size 117706017
fdc_level2=07/train-00094-of-00095.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3df420eaac9e8edf654621bb59aa41b53fb4a53495a8ca834687e937786d6800
+ size 122414597
fdc_level2=08/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d181a6195ad72cb4d69fbd524eadd1ec1452329c98d1fe17ba4e78d23b6ab214
+ size 180931
fdc_level2=09/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9818bd1d4e6e35baf171e3fb06a25cf263b5d6ed93261c2b1a8e4f7b42dcd8c7
+ size 103380
fdc_level2=11/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31481c2ce82400b20a84c5bc548ca6885a2ecca4d435ebc6550c7fba6b8ffc41
+ size 18727376
fdc_level2=13/train-00002-of-00184.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4ee2bbe4fab2d52f9820d7e0cd8059dbb26726c1544f0a452275f1a3bd39164
+ size 129213982
fdc_level2=13/train-00051-of-00184.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c033e64a0874869fb46a1f96a39c9f9c6d67b0468432b7ea57276eb9ea8261e
+ size 17515752
fdc_level2=13/train-00182-of-00184.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9c337fe601c07137cc91c6711982d512a64b959c3cbf1294b245d2b0026cfea
+ size 129751012
fdc_level2=14/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b892ea16e3f7f12b71beff4b7074ace419b0eb5fd21a4bad155691beadcdaf2
+ size 246316
fdc_level2=15/train-00001-of-00213.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3894b2fe2a735cd6a6f2feb5bc50f6effaec05d7eb41e7542c8155796b05075a
+ size 126726045
fdc_level2=15/train-00049-of-00213.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e95a96fd4297b2b1f50f6c712c09f8f7fc0e0936017e75b714632c99291657cd
+ size 125290358
fdc_level2=15/train-00149-of-00213.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49f0be3129035422f705781016cc8d39fb8849b33112d3ca1a2b70e1d6d39e89
+ size 125885861
fdc_level2=16/train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c71294373b81bf3c47c5a042a254cdd108c32ca967fcfc494622d7e39f06d6e
+ size 80643592
fdc_level2=16/train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e3dc9f3ca5f7e912e9ace8a39e608602d39c05bfd098e46903ad4296a3acb7e
+ size 85928634
fdc_level2=17/train-00002-of-00028.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc84bd7830e10306fc5a2b3ef2c7a891421eef3ec2ecf2e975ae8e79ecb09d2e
+ size 130349011
fdc_level2=17/train-00026-of-00028.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dafe7753f1a669e36297a3ade3ffad4b1847c1899277d9da5fe6e2857655824a
+ size 131128256
fdc_level2=18/train-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d3df2494085662aa735b6be0b273932ea7648277b60d913e09b08ebd55f4042
+ size 128759155
fdc_level2=18/train-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a75140a09328d2015e54e17528f65213f779cf4bcae3ac999955b5c33a9bd817
+ size 125390296
fdc_level2=18/train-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7dfa8bcce171193c536c8591387bd36db94c3318ff1877bdd3fa04250a8578a
+ size 121769996
fdc_level2=19/train-00002-of-00014.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3e4a6e23b8504d13609e0251b3588e2c1892e356eff8230cd028cec515b2e99
+ size 134134097
fdc_level2=19/train-00006-of-00014.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b603de687553629a9b9de49be23e45b75cf0db9e2a501dc1f681730ea1f07cf7
+ size 133688006
fdc_level2=19/train-00012-of-00014.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07d453dae5c083b00078087752a899276f5743ce93366c501e3dbdf928f52ed4
+ size 133584523
fdc_level2=21/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9735d9bb797c2b248cdfb6798a566b6f274d8de261cc78641d877bcabe321cbe
+ size 18293603
fdc_level2=22/train-00002-of-00158.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d732e719449f89b200dd254c38294dc803daa6a36f358584c8a22cd2cd884fa9
+ size 129092353
fdc_level2=22/train-00049-of-00158.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91da6c8351f305f9865db3ecd0b10a9f66a0e362280de0819f9f52168663953a
+ size 129699394
fdc_level2=22/train-00149-of-00158.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4b2c6c3c9f3113f799ecc5c69ebe6b0e1d95d66d08d512a6f5aab81f3985684
+ size 129677674
fdc_level2=23/train-00002-of-00176.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7610d3615a6de7470a22b039aa239be61f8f3eb6265c96624db0792ab04c9b4b
+ size 132430232
fdc_level2=23/train-00048-of-00176.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2107fada478037c60892f86b97a23ddc1e83015e5da696302808a8eaed02e92
+ size 132563549
fdc_level2=23/train-00148-of-00176.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f31869a2ee55951903c869a0bfffe884136411736e2229c3d5bf6cd2e1cf90c
+ size 132082744
fdc_level2=24/train-00003-of-00152.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47a9e37c8586276c0a510f4ac00520bcf287a005bb538a96d59171be8fc30e86
+ size 115040011
fdc_level2=24/train-00049-of-00152.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:641737db140ae8da409a8970b75d0fe6f4b09573f2576180dece93940421183f
+ size 116999894
fdc_level2=24/train-00149-of-00152.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:acbdbb6b5a2b0483b744397cf3cbff6af47ad5f4cb80eff5d1ce1dacf8fb2798
+ size 113122233
fdc_level2=25/train-00001-of-00013.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8388361e6b81213327169a0037d7eff411af73fa1a27e22dd0ebbc45b080ce5e
+ size 69224348
fdc_level2=25/train-00007-of-00013.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8c66958bd8db37634576dc7c7d395fdc562ad4e05ca710ecd69c0dca41da3b8
+ size 68835294
fdc_level2=25/train-00012-of-00013.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:775e2e60def66bb183d3b8b8e5ef78c956d0f46b3525d83806c508b48f19174c
+ size 69754759
fdc_level2=26/train-00002-of-00064.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9260eae6e0f9d4a7c4ccbdfeca28b65154ed70916be4f4908bae5cf11ca7cc40
+ size 129209084
fdc_level2=26/train-00048-of-00064.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88e08c97cc7f9e9e919b3e46810892ec1d64d49ef4ddeba97be6ffd08a2b96d1
+ size 129924088
fdc_level2=26/train-00062-of-00064.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e5b206739be80608f9d77da48c9ac7d467dcb63b407cb07da75c120d5ef7777
+ size 130505205