Dataset card for FineWeb-Edu 10B tokenized dataset
This dataset contains tokenized texts from the FineWeb-Edu sample-10BT subset (HuggingFaceFW/fineweb-edu). The data was tokenized using OpenAI's tiktoken tokenizer and structured for efficient streaming and distributed (DDP) training.
Structure
The dataset follows Hugging Face's recommended structure for efficient streaming in multi-GPU environments. It consists of two splits, each containing a number of shards that is a multiple of 8, so the shards can be distributed evenly across GPU processes via datasets.distributed.split_dataset_by_node(). Dataset configuration:
train:
- num shards: 128
- num rows per shard: 295
- num tokens per row: 262,145 (2048 x 128 + 1; see the sketch after this list)
validation:
- num shards: 8
- num rows per shard: 26
- num tokens per row: same as train
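The per-row layout suggests that each row packs 128 training sequences of context length 2048, with one extra token so that inputs and next-token targets can be cut from the same row at a one-token offset. A minimal sketch of slicing a row this way (the helper row_to_batches is illustrative and not part of the dataset, and the interpretation of the extra "+1" token is an assumption):

import numpy as np

CONTEXT = 2048
SEQS_PER_ROW = 128

def row_to_batches(tokens):
    # One dataset row holds CONTEXT * SEQS_PER_ROW + 1 = 262,145 token ids.
    buf = np.asarray(tokens, dtype=np.int64)
    assert buf.size == CONTEXT * SEQS_PER_ROW + 1
    x = buf[:-1].reshape(SEQS_PER_ROW, CONTEXT)  # inputs
    y = buf[1:].reshape(SEQS_PER_ROW, CONTEXT)   # targets, shifted by one token
    return x, y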
How to use
Streaming on a single GPU:
from datasets import load_dataset

# Stream the train split lazily instead of downloading all shards up front.
dataset = load_dataset("nikolina-p/fineweb_10BT_tokenized", split="train", streaming=True)
stream = iter(dataset)
tokens = next(stream)["tokens"]  # one row: a list of token ids
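Each streamed row should contain exactly 262,145 token ids, matching the configuration above; a quick sanity check:

assert len(tokens) == 2048 * 128 + 1  # 262,145 ids per row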
Streaming in a multi-GPU environment:
import os
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

dataset = load_dataset("nikolina-p/fineweb_10BT_tokenized", split="train", streaming=True)

# Give each process a disjoint subset of shards; RANK and WORLD_SIZE are
# set by the DDP launcher (e.g. torchrun).
dataset = split_dataset_by_node(
    dataset,
    rank=int(os.environ["RANK"]),
    world_size=int(os.environ["WORLD_SIZE"]),
)
stream = iter(dataset)
tokens = next(stream)["tokens"]
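To turn the stream into model-ready batches, each row can be reshaped into input/target pairs. A minimal sketch, reusing the hypothetical row_to_batches helper from the Structure section (the PyTorch conversion is an assumption for illustration, not part of the dataset):

import torch

for row in dataset:
    x, y = row_to_batches(row["tokens"])  # numpy arrays, shape (128, 2048)
    inputs = torch.from_numpy(x)
    targets = torch.from_numpy(y)
    # ... model forward pass, loss against targets, backward pass ...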