Datasets:

Columns: `datasetId` (string), `author` (string), `last_modified` (date), `downloads` (int), `likes` (int), `tags` (list), `task_categories` (list), `createdAt` (date), `trending_score` (float, may be null), `card` (string).

| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
|---|---|---|---|---|---|---|---|---|---|
MisDrifter/eval_8B_base_on_train_armo | MisDrifter | 2025-06-25T02:22:34Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-25T02:22:33Z | null | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: response_0
dtype: string
- name: response_0_reward
dtype: float64
splits:
- name: train
num_bytes: 50189037
num_examples: 20000
download_size: 28411479
dataset_size: 50189037
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
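The `dataset_info` block above implies roughly 2.5 KB of raw text per example, with parquet compressing it to a little over half that. A quick sanity check using the numbers copied from the metadata:

```python
# Figures copied from the dataset_info block above.
num_bytes = 50_189_037      # uncompressed size of the train split
num_examples = 20_000
download_size = 28_411_479  # compressed parquet size

bytes_per_example = num_bytes / num_examples
compression_ratio = download_size / num_bytes

print(round(bytes_per_example, 2))  # ~2509.45 bytes, i.e. ~2.5 KB per example
print(round(compression_ratio, 2))  # parquet holds ~57% of the raw size
```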
|
momo1942/x_dataset_33945 | momo1942 | 2025-06-24T19:41:31Z | 728 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T23:48:53Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** momo1942/x_dataset_33945
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5Eexfw8PQvNYvtG66oKZWsZLGcbjF2K6cGEtKarqSPn7cajP
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though the data can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
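Since there are no fixed splits, one common approach is a temporal holdout on the `datetime` field. A minimal sketch with pandas on synthetic rows (the tweet texts and dates below are invented for illustration; real data would come from this repository's parquet files):

```python
import pandas as pd

# Synthetic rows shaped like the card's fields.
rows = pd.DataFrame({
    "text": ["tweet a", "tweet b", "tweet c", "tweet d"],
    "datetime": ["2025-01-22", "2025-01-28", "2025-02-03", "2025-02-09"],
})
rows["datetime"] = pd.to_datetime(rows["datetime"])

# Everything before the cutoff trains; everything at or after is held out.
cutoff = pd.Timestamp("2025-02-01")
train = rows[rows["datetime"] < cutoff]
test = rows[rows["datetime"] >= cutoff]
```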
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{momo19422025datauniversex_dataset_33945,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={momo1942},
year={2025},
url={https://huggingface.co/datasets/momo1942/x_dataset_33945},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 49625357
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-12T00:00:00Z
- **Last Updated:** 2025-02-18T17:21:47Z
### Data Distribution
- Tweets with hashtags: 37.72%
- Tweets without hashtags: 62.28%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 30908871 | 62.28% |
| 2 | #riyadh | 297876 | 0.60% |
| 3 | #zelena | 217496 | 0.44% |
| 4 | #tiktok | 180979 | 0.36% |
| 5 | #bbb25 | 109126 | 0.22% |
| 6 | #ad | 104845 | 0.21% |
| 7 | #jhope_at_galadespiècesjaunes | 95373 | 0.19% |
| 8 | #superbowl | 86641 | 0.17% |
| 9 | #bbmzansi | 60387 | 0.12% |
| 10 | #pr | 56901 | 0.11% |
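The NULL row counts tweets with no hashtag, so its percentage should match the "Tweets without hashtags" figure in the distribution above. Checking with the published totals:

```python
# Counts copied from the statistics above.
total_instances = 49_625_357
null_count = 30_908_871  # tweets with no hashtag

pct_without_hashtags = 100 * null_count / total_instances
print(round(pct_without_hashtags, 2))  # 62.28, matching the distribution figure
```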
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T23:49:42Z | 3195659 | 3195659 |
| 2025-01-30T11:52:49Z | 9861630 | 13057289 |
| 2025-02-02T23:56:06Z | 11110169 | 24167458 |
| 2025-02-06T11:58:49Z | 7448788 | 31616246 |
| 2025-02-10T00:01:43Z | 7309603 | 38925849 |
| 2025-02-17T02:22:24Z | 9376101 | 48301950 |
| 2025-02-18T02:20:39Z | 689645 | 48991595 |
| 2025-02-18T17:21:47Z | 633762 | 49625357 |
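The "Total Instances" column is the running sum of "New Instances", and the final row should equal the headline total above. Verifying with the table's values:

```python
# "New Instances" column, copied from the update history above.
new_instances = [
    3195659, 9861630, 11110169, 7448788,
    7309603, 9376101, 689645, 633762,
]

# Rebuild the running "Total Instances" column.
running = 0
totals = []
for n in new_instances:
    running += n
    totals.append(running)

print(totals[-1])  # 49625357, the headline Total Instances
```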
|
Voxel51/ARCADE_FO | Voxel51 | 2025-06-24T19:06:42Z | 0 | 0 | [
"task_categories:object-detection",
"language:en",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"region:us",
"fiftyone",
"image",
"object-detection"
] | [
"object-detection"
] | 2025-06-24T18:02:52Z | null | ---
annotations_creators: []
language: en
size_categories:
- 1K<n<10K
task_categories:
- object-detection
task_ids: []
pretty_name: arcade_combined_export
tags:
- fiftyone
- image
- object-detection
dataset_summary: '
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 3000 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = load_from_hub("pjramg/arcade_fiftyone")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for arcade_combined_export
<!-- Provide a quick summary of the dataset. -->
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 3000 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("pjramg/arcade_fiftyone")
# Launch the App
session = fo.launch_app(dataset)
```
# ARCADE Combined Dataset (FiftyOne Format)
The **ARCADE Combined Dataset** is a curated collection of coronary angiography images and annotations designed to evaluate coronary artery stenosis. This version has been processed and exported using [FiftyOne](https://voxel51.com/fiftyone), and includes cleaned segmentation data, metadata fields for clinical context, and embedded visual labels.
## Dataset Structure
- `segmentations`: COCO-style detection masks per coronary artery segment.
- `phase`: The acquisition phase of the angiography video.
- `task`: The labeling task the sample belongs to (segmentation or regression).
- `subset_name`: Subdivision info (train, val, test).
- `coco_id`: Corresponding COCO ID for alignment with original sources.
- `filepath`: Path to the image file.
- `metadata`: Image metadata including dimensions and pixel spacing.
## Format
This dataset is stored in **FiftyOneDataset format**, which consists of:
- `data.json`: Metadata and label references
- `data/`: Folder containing all image samples
- Optional: auxiliary files (e.g., `README.md`, config, JSON index)
To load it in Python:
```python
import fiftyone as fo
dataset = fo.Dataset.from_dir(
dataset_dir="arcade_combined_fiftyone",
dataset_type=fo.types.FiftyOneDataset,
)
```
## Source
The original ARCADE dataset was introduced in the paper:
Labrecque Langlais et al. (2023) — Evaluation of Stenoses Using AI Video Models Applied to Coronary Angiographies.
https://doi.org/10.21203/rs.3.rs-3610879/v1
This combined version aggregates and restructures subsets across tasks and phases, harmonized with FiftyOne tooling for streamlined model training and evaluation.
## License
This dataset is shared for research and academic use only. Please consult the original dataset license for clinical or commercial applications.
## Citation
```bibtex
@article{avram2023evaluation,
title={Evaluation of Stenoses Using AI Video Models Applied to Coronary Angiographies},
author={Labrecque Langlais, E. and Corbin, D. and Tastet, O. and Hayek, A. and Doolub, G. and Mrad, S. and Tardif, J.-C. and Tanguay, J.-F. and Marquis-Gravel, G. and Tison, G. and Kadoury, S. and Le, W. and Gallo, R. and Lesage, F. and Avram, R.},
year={2023}
}
```
## Dataset Card Contact
[Paula Ramos](https://huggingface.co/datasets/pjramg) |
Zhitao-He/MATPBench | Zhitao-He | 2025-06-24T16:36:55Z | 127 | 3 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-16T04:21:03Z | null | ---
license: apache-2.0
---
|
graliuce/MedMCQA.24.01 | graliuce | 2025-06-24T16:31:07Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T16:25:42Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: suffix
dtype: string
splits:
- name: train
num_bytes: 4582059
num_examples: 4780
- name: test
num_bytes: 96564
num_examples: 100
download_size: 803311
dataset_size: 4678623
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
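Each record stores a chat-style `messages` list whose turns carry `content`, `role`, and `suffix` strings, per the feature schema above. A minimal example of that shape in plain Python (the question text is invented for illustration):

```python
# Hypothetical record matching the `messages` feature declared above.
example = {
    "messages": [
        {"content": "Which vitamin deficiency causes scurvy?",
         "role": "user", "suffix": ""},
        {"content": "Vitamin C (ascorbic acid) deficiency.",
         "role": "assistant", "suffix": ""},
    ]
}

# Each turn is a dict of strings; roles alternate user/assistant.
roles = [turn["role"] for turn in example["messages"]]
print(roles)  # ['user', 'assistant']
```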
|
hshwk1983/x_dataset_2983 | hshwk1983 | 2025-06-24T16:09:19Z | 1,122 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T07:01:04Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** hshwk1983/x_dataset_2983
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5F2RCkLaXEwdz4PALA5iwSBQQ4rWEAioaniBHouRyhUSYjne
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though the data can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{hshwk19832025datauniversex_dataset_2983,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={hshwk1983},
year={2025},
url={https://huggingface.co/datasets/hshwk1983/x_dataset_2983},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 45361062
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-10T00:00:00Z
- **Last Updated:** 2025-02-18T19:56:18Z
### Data Distribution
- Tweets with hashtags: 49.23%
- Tweets without hashtags: 50.77%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 23030513 | 50.77% |
| 2 | #riyadh | 389599 | 0.86% |
| 3 | #zelena | 270010 | 0.60% |
| 4 | #tiktok | 216661 | 0.48% |
| 5 | #ad | 127153 | 0.28% |
| 6 | #bbb25 | 124236 | 0.27% |
| 7 | #jhope_at_galadespiècesjaunes | 107356 | 0.24% |
| 8 | #bbmzansi | 73192 | 0.16% |
| 9 | #granhermano | 70611 | 0.16% |
| 10 | #trump | 67914 | 0.15% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T07:02:01Z | 2802800 | 2802800 |
| 2025-01-30T19:05:56Z | 9696380 | 12499180 |
| 2025-02-03T07:09:45Z | 10920384 | 23419564 |
| 2025-02-06T19:12:26Z | 6138868 | 29558432 |
| 2025-02-10T07:16:07Z | 8261798 | 37820230 |
| 2025-02-13T19:19:34Z | 6252880 | 44073110 |
| 2025-02-18T04:54:59Z | 640422 | 44713532 |
| 2025-02-18T19:56:18Z | 647530 | 45361062 |
|
littleGuagua/x_dataset_24747 | littleGuagua | 2025-06-24T16:00:50Z | 1,142 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T08:49:30Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_24747
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5EM4mwdfwdBzEbEqJ9KsFnj2sKpAjywcb5Ddz3CEoKV2ksj1
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though the data can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_24747,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_24747},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 157467919
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-13T00:00:00Z
- **Last Updated:** 2025-02-18T16:32:12Z
### Data Distribution
- Tweets with hashtags: 42.71%
- Tweets without hashtags: 57.29%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 90209693 | 57.29% |
| 2 | #riyadh | 1088786 | 0.69% |
| 3 | #zelena | 820088 | 0.52% |
| 4 | #tiktok | 653763 | 0.42% |
| 5 | #bbb25 | 394331 | 0.25% |
| 6 | #ad | 378659 | 0.24% |
| 7 | #jhope_at_galadespiècesjaunes | 234371 | 0.15% |
| 8 | #bbmzansi | 213586 | 0.14% |
| 9 | #pr | 203109 | 0.13% |
| 10 | #yahooニュース | 190885 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T08:50:16Z | 2482006 | 2482006 |
| 2025-01-29T21:00:47Z | 29908448 | 32390454 |
| 2025-02-02T09:11:30Z | 28938392 | 61328846 |
| 2025-02-05T21:23:51Z | 29767835 | 91096681 |
| 2025-02-09T09:36:47Z | 29027751 | 120124432 |
| 2025-02-12T21:54:03Z | 28620241 | 148744673 |
| 2025-02-16T09:45:11Z | 7404661 | 156149334 |
| 2025-02-18T00:09:45Z | 696224 | 156845558 |
| 2025-02-18T16:32:12Z | 622361 | 157467919 |
|
hshwk1983/x_dataset_27221 | hshwk1983 | 2025-06-24T15:59:18Z | 1,245 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T01:57:54Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** hshwk1983/x_dataset_27221
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5HozLaXwAyioW1oEwf6zAysEyyGXcCifVwCeYiz6SKvSrm52
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though the data can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{hshwk19832025datauniversex_dataset_27221,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={hshwk1983},
year={2025},
url={https://huggingface.co/datasets/hshwk1983/x_dataset_27221},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 37381424
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-11T00:00:00Z
- **Last Updated:** 2025-02-18T19:01:56Z
### Data Distribution
- Tweets with hashtags: 29.18%
- Tweets without hashtags: 70.82%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 26474119 | 70.82% |
| 2 | #riyadh | 165345 | 0.44% |
| 3 | #zelena | 147137 | 0.39% |
| 4 | #tiktok | 108254 | 0.29% |
| 5 | #jhope_at_galadespiècesjaunes | 96559 | 0.26% |
| 6 | #ad | 65237 | 0.17% |
| 7 | #bbb25 | 63704 | 0.17% |
| 8 | #royalrumble | 45208 | 0.12% |
| 9 | #precure | 44979 | 0.12% |
| 10 | #bbmzansi | 41847 | 0.11% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T01:58:44Z | 3242408 | 3242408 |
| 2025-01-30T14:08:14Z | 6911604 | 10154012 |
| 2025-02-03T02:11:35Z | 9565243 | 19719255 |
| 2025-02-06T14:13:40Z | 5208295 | 24927550 |
| 2025-02-10T02:18:00Z | 8468886 | 33396436 |
| 2025-02-13T14:19:50Z | 2518336 | 35914772 |
| 2025-02-18T04:01:08Z | 807421 | 36722193 |
| 2025-02-18T19:01:56Z | 659231 | 37381424 |
|
arushisinha98/worldbank_dataset | arushisinha98 | 2025-06-24T15:22:31Z | 41 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-20T16:44:40Z | null | ---
dataset_info:
features:
- name: BG.GSR.NFSV.GD.ZS
dtype: float64
- name: BM.GSR.CMCP.ZS
dtype: float64
- name: BM.GSR.FCTY.CD
dtype: float64
- name: BM.GSR.GNFS.CD
dtype: float64
- name: BM.GSR.INSF.ZS
dtype: float64
- name: BM.GSR.MRCH.CD
dtype: float64
- name: BM.GSR.NFSV.CD
dtype: float64
- name: BM.GSR.ROYL.CD
dtype: float64
- name: BM.GSR.TOTL.CD
dtype: float64
- name: BM.GSR.TRAN.ZS
dtype: float64
- name: BM.GSR.TRVL.ZS
dtype: float64
- name: BM.KLT.DINV.CD.WD
dtype: float64
- name: BM.KLT.DINV.WD.GD.ZS
dtype: float64
- name: BM.TRF.PRVT.CD
dtype: float64
- name: BM.TRF.PWKR.CD.DT
dtype: float64
- name: BN.CAB.XOKA.CD
dtype: float64
- name: BN.CAB.XOKA.GD.ZS
dtype: float64
- name: BN.FIN.TOTL.CD
dtype: float64
- name: BN.GSR.FCTY.CD
dtype: float64
- name: BN.GSR.GNFS.CD
dtype: float64
- name: BN.GSR.MRCH.CD
dtype: float64
- name: BN.KAC.EOMS.CD
dtype: float64
- name: BN.KLT.DINV.CD
dtype: float64
- name: BN.KLT.PTXL.CD
dtype: float64
- name: BN.RES.INCL.CD
dtype: float64
- name: BN.TRF.CURR.CD
dtype: float64
- name: BN.TRF.KOGT.CD
dtype: float64
- name: BX.GRT.EXTA.CD.WD
dtype: float64
- name: BX.GRT.TECH.CD.WD
dtype: float64
- name: BX.GSR.CCIS.CD
dtype: float64
- name: BX.GSR.CCIS.ZS
dtype: float64
- name: BX.GSR.CMCP.ZS
dtype: float64
- name: BX.GSR.FCTY.CD
dtype: float64
- name: BX.GSR.GNFS.CD
dtype: float64
- name: BX.GSR.INSF.ZS
dtype: float64
- name: BX.GSR.MRCH.CD
dtype: float64
- name: BX.GSR.NFSV.CD
dtype: float64
- name: BX.GSR.ROYL.CD
dtype: float64
- name: BX.GSR.TOTL.CD
dtype: float64
- name: BX.GSR.TRAN.ZS
dtype: float64
- name: BX.GSR.TRVL.ZS
dtype: float64
- name: BX.KLT.DINV.CD.WD
dtype: float64
- name: BX.KLT.DINV.WD.GD.ZS
dtype: float64
- name: BX.PEF.TOTL.CD.WD
dtype: float64
- name: BX.TRF.CURR.CD
dtype: float64
- name: BX.TRF.PWKR.CD
dtype: float64
- name: BX.TRF.PWKR.CD.DT
dtype: float64
- name: BX.TRF.PWKR.DT.GD.ZS
dtype: float64
- name: CM.MKT.INDX.ZG
dtype: float64
- name: CM.MKT.LCAP.CD
dtype: float64
- name: CM.MKT.LCAP.GD.ZS
dtype: float64
- name: CM.MKT.LDOM.NO
dtype: float64
- name: CM.MKT.TRAD.CD
dtype: float64
- name: CM.MKT.TRAD.GD.ZS
dtype: float64
- name: CM.MKT.TRNR
dtype: float64
- name: DT.DOD.DECT.CD
dtype: float64
- name: DT.DOD.DECT.GN.ZS
dtype: float64
- name: DT.DOD.DIMF.CD
dtype: float64
- name: DT.DOD.DLXF.CD
dtype: float64
- name: DT.DOD.DPNG.CD
dtype: float64
- name: DT.DOD.DPPG.CD
dtype: float64
- name: DT.DOD.DSTC.CD
dtype: float64
- name: DT.DOD.DSTC.IR.ZS
dtype: float64
- name: DT.DOD.DSTC.XP.ZS
dtype: float64
- name: DT.DOD.DSTC.ZS
dtype: float64
- name: DT.DOD.MIBR.CD
dtype: float64
- name: DT.DOD.MIDA.CD
dtype: float64
- name: DT.DOD.MWBG.CD
dtype: float64
- name: DT.DOD.PVLX.CD
dtype: float64
- name: DT.DOD.PVLX.EX.ZS
dtype: float64
- name: DT.DOD.PVLX.GN.ZS
dtype: float64
- name: DT.NFL.BLAT.CD
dtype: float64
- name: DT.NFL.BOND.CD
dtype: float64
- name: DT.NFL.DPNG.CD
dtype: float64
- name: DT.NFL.IMFC.CD
dtype: float64
- name: DT.NFL.IMFN.CD
dtype: float64
- name: DT.NFL.MIBR.CD
dtype: float64
- name: DT.NFL.MIDA.CD
dtype: float64
- name: DT.NFL.MLAT.CD
dtype: float64
- name: DT.NFL.MOTH.CD
dtype: float64
- name: DT.NFL.NIFC.CD
dtype: float64
- name: DT.NFL.OFFT.CD
dtype: float64
- name: DT.NFL.PBND.CD
dtype: float64
- name: DT.NFL.PCBK.CD
dtype: float64
- name: DT.NFL.PCBO.CD
dtype: float64
- name: DT.NFL.PNGB.CD
dtype: float64
- name: DT.NFL.PNGC.CD
dtype: float64
- name: DT.NFL.PROP.CD
dtype: float64
- name: DT.NFL.PRVT.CD
dtype: float64
- name: DT.NFL.RDBC.CD
dtype: float64
- name: DT.NFL.RDBN.CD
dtype: float64
- name: DT.ODA.ODAT.CD
dtype: float64
- name: DT.ODA.ODAT.GN.ZS
dtype: float64
- name: DT.ODA.ODAT.PC.ZS
dtype: float64
- name: DT.TDS.DECT.CD
dtype: float64
- name: DT.TDS.DECT.EX.ZS
dtype: float64
- name: DT.TDS.DECT.GN.ZS
dtype: float64
- name: DT.TDS.DIMF.CD
dtype: float64
- name: DT.TDS.DPPF.XP.ZS
dtype: float64
- name: DT.TDS.DPPG.CD
dtype: float64
- name: DT.TDS.DPPG.GN.ZS
dtype: float64
- name: DT.TDS.DPPG.XP.ZS
dtype: float64
- name: DT.TDS.MLAT.CD
dtype: float64
- name: DT.TDS.MLAT.PG.ZS
dtype: float64
- name: FB.AST.NPER.ZS
dtype: float64
- name: FB.ATM.TOTL.P5
dtype: float64
- name: FB.BNK.CAPA.ZS
dtype: float64
- name: FB.CBK.BRCH.P5
dtype: float64
- name: FB.CBK.BRWR.P3
dtype: float64
- name: FB.CBK.DPTR.P3
dtype: float64
- name: FD.AST.PRVT.GD.ZS
dtype: float64
- name: FD.RES.LIQU.AS.ZS
dtype: float64
- name: FI.RES.TOTL.CD
dtype: float64
- name: FI.RES.TOTL.DT.ZS
dtype: float64
- name: FI.RES.TOTL.MO
dtype: float64
- name: FI.RES.XGLD.CD
dtype: float64
- name: FM.AST.CGOV.ZG.M3
dtype: float64
- name: FM.AST.DOMO.ZG.M3
dtype: float64
- name: FM.AST.DOMS.CN
dtype: float64
- name: FM.AST.NFRG.CN
dtype: float64
- name: FM.AST.PRVT.GD.ZS
dtype: float64
- name: FM.AST.PRVT.ZG.M3
dtype: float64
- name: FM.LBL.BMNY.CN
dtype: float64
- name: FM.LBL.BMNY.GD.ZS
dtype: float64
- name: FM.LBL.BMNY.IR.ZS
dtype: float64
- name: FM.LBL.BMNY.ZG
dtype: float64
- name: FP.CPI.TOTL
dtype: float64
- name: FP.CPI.TOTL.ZG
dtype: float64
- name: FP.WPI.TOTL
dtype: float64
- name: FR.INR.DPST
dtype: float64
- name: FR.INR.LEND
dtype: float64
- name: FR.INR.LNDP
dtype: float64
- name: FR.INR.RINR
dtype: float64
- name: FR.INR.RISK
dtype: float64
- name: FS.AST.CGOV.GD.ZS
dtype: float64
- name: FS.AST.DOMO.GD.ZS
dtype: float64
- name: FS.AST.DOMS.GD.ZS
dtype: float64
- name: FS.AST.PRVT.GD.ZS
dtype: float64
- name: FX.OWN.TOTL.40.ZS
dtype: float64
- name: FX.OWN.TOTL.60.ZS
dtype: float64
- name: FX.OWN.TOTL.FE.ZS
dtype: float64
- name: FX.OWN.TOTL.MA.ZS
dtype: float64
- name: FX.OWN.TOTL.OL.ZS
dtype: float64
- name: FX.OWN.TOTL.PL.ZS
dtype: float64
- name: FX.OWN.TOTL.SO.ZS
dtype: float64
- name: FX.OWN.TOTL.YG.ZS
dtype: float64
- name: FX.OWN.TOTL.ZS
dtype: float64
- name: GC.DOD.TOTL.GD.ZS
dtype: float64
- name: GC.REV.XGRT.GD.ZS
dtype: float64
- name: GC.XPN.TOTL.GD.ZS
dtype: float64
- name: NE.CON.GOVT.CD
dtype: float64
- name: NE.CON.GOVT.CN
dtype: float64
- name: NE.CON.GOVT.KD
dtype: float64
- name: NE.CON.GOVT.KD.ZG
dtype: float64
- name: NE.CON.GOVT.KN
dtype: float64
- name: NE.CON.GOVT.ZS
dtype: float64
- name: NE.CON.PRVT.CD
dtype: float64
- name: NE.CON.PRVT.CN
dtype: float64
- name: NE.CON.PRVT.CN.AD
dtype: float64
- name: NE.CON.PRVT.KD
dtype: float64
- name: NE.CON.PRVT.KD.ZG
dtype: float64
- name: NE.CON.PRVT.KN
dtype: float64
- name: NE.CON.PRVT.PC.KD
dtype: float64
- name: NE.CON.PRVT.PC.KD.ZG
dtype: float64
- name: NE.CON.PRVT.PP.CD
dtype: float64
- name: NE.CON.PRVT.PP.KD
dtype: float64
- name: NE.CON.PRVT.ZS
dtype: float64
- name: NE.CON.TOTL.CD
dtype: float64
- name: NE.CON.TOTL.CN
dtype: float64
- name: NE.CON.TOTL.KD
dtype: float64
- name: NE.CON.TOTL.KD.ZG
dtype: float64
- name: NE.CON.TOTL.KN
dtype: float64
- name: NE.CON.TOTL.ZS
dtype: float64
- name: NE.DAB.DEFL.ZS
dtype: float64
- name: NE.DAB.TOTL.CD
dtype: float64
- name: NE.DAB.TOTL.CN
dtype: float64
- name: NE.DAB.TOTL.KD
dtype: float64
- name: NE.DAB.TOTL.KN
dtype: float64
- name: NE.DAB.TOTL.ZS
dtype: float64
- name: NE.EXP.GNFS.CD
dtype: float64
- name: NE.EXP.GNFS.CN
dtype: float64
- name: NE.EXP.GNFS.KD
dtype: float64
- name: NE.EXP.GNFS.KD.ZG
dtype: float64
- name: NE.EXP.GNFS.KN
dtype: float64
- name: NE.EXP.GNFS.ZS
dtype: float64
- name: NE.GDI.FPRV.CN
dtype: float64
- name: NE.GDI.FPRV.ZS
dtype: float64
- name: NE.GDI.FTOT.CD
dtype: float64
- name: NE.GDI.FTOT.CN
dtype: float64
- name: NE.GDI.FTOT.KD
dtype: float64
- name: NE.GDI.FTOT.KD.ZG
dtype: float64
- name: NE.GDI.FTOT.KN
dtype: float64
- name: NE.GDI.FTOT.ZS
dtype: float64
- name: NE.GDI.STKB.CD
dtype: float64
- name: NE.GDI.STKB.CN
dtype: float64
- name: NE.GDI.STKB.KN
dtype: float64
- name: NE.GDI.TOTL.CD
dtype: float64
- name: NE.GDI.TOTL.CN
dtype: float64
- name: NE.GDI.TOTL.KD
dtype: float64
- name: NE.GDI.TOTL.KD.ZG
dtype: float64
- name: NE.GDI.TOTL.KN
dtype: float64
- name: NE.GDI.TOTL.ZS
dtype: float64
- name: NE.IMP.GNFS.CD
dtype: float64
- name: NE.IMP.GNFS.CN
dtype: float64
- name: NE.IMP.GNFS.KD
dtype: float64
- name: NE.IMP.GNFS.KD.ZG
dtype: float64
- name: NE.IMP.GNFS.KN
dtype: float64
- name: NE.IMP.GNFS.ZS
dtype: float64
- name: NE.RSB.GNFS.CD
dtype: float64
- name: NE.RSB.GNFS.ZS
dtype: float64
- name: NE.TRD.GNFS.ZS
dtype: float64
- name: NV.AGR.EMPL.KD
dtype: float64
- name: NV.AGR.TOTL.CN
dtype: float64
- name: NV.AGR.TOTL.KD
dtype: float64
- name: NV.AGR.TOTL.KD.ZG
dtype: float64
- name: NV.AGR.TOTL.KN
dtype: float64
- name: NV.FSM.TOTL.CN
dtype: float64
- name: NV.FSM.TOTL.KN
dtype: float64
- name: NV.IND.EMPL.KD
dtype: float64
- name: NV.IND.MANF.CD
dtype: float64
- name: NV.IND.MANF.CN
dtype: float64
- name: NV.IND.MANF.KD
dtype: float64
- name: NV.IND.MANF.KD.ZG
dtype: float64
- name: NV.IND.MANF.KN
dtype: float64
- name: NV.IND.MANF.ZS
dtype: float64
- name: NV.IND.TOTL.CD
dtype: float64
- name: NV.IND.TOTL.CN
dtype: float64
- name: NV.IND.TOTL.KD
dtype: float64
- name: NV.IND.TOTL.KD.ZG
dtype: float64
- name: NV.IND.TOTL.KN
dtype: float64
- name: NV.IND.TOTL.ZS
dtype: float64
- name: NV.MNF.CHEM.ZS.UN
dtype: float64
- name: NV.MNF.FBTO.ZS.UN
dtype: float64
- name: NV.MNF.MTRN.ZS.UN
dtype: float64
- name: NV.MNF.OTHR.ZS.UN
dtype: float64
- name: NV.MNF.TECH.ZS.UN
dtype: float64
- name: NV.MNF.TXTL.ZS.UN
dtype: float64
- name: NV.SRV.EMPL.KD
dtype: float64
- name: NV.SRV.TOTL.CD
dtype: float64
- name: NV.SRV.TOTL.CN
dtype: float64
- name: NV.SRV.TOTL.KD
dtype: float64
- name: NV.SRV.TOTL.KD.ZG
dtype: float64
- name: NV.SRV.TOTL.KN
dtype: float64
- name: NV.SRV.TOTL.ZS
dtype: float64
- name: NY.ADJ.DRES.GN.ZS
dtype: float64
- name: NY.ADJ.NNTY.CD
dtype: float64
- name: NY.ADJ.NNTY.KD
dtype: float64
- name: NY.ADJ.NNTY.KD.ZG
dtype: float64
- name: NY.ADJ.NNTY.PC.CD
dtype: float64
- name: NY.ADJ.NNTY.PC.KD
dtype: float64
- name: NY.ADJ.NNTY.PC.KD.ZG
dtype: float64
- name: NY.EXP.CAPM.KN
dtype: float64
- name: NY.GDP.DEFL.KD.ZG
dtype: float64
- name: NY.GDP.DEFL.KD.ZG.AD
dtype: float64
- name: NY.GDP.DEFL.ZS
dtype: float64
- name: NY.GDP.DEFL.ZS.AD
dtype: float64
- name: NY.GDP.DISC.CN
dtype: float64
- name: NY.GDP.DISC.KN
dtype: float64
- name: NY.GDP.FCST.CD
dtype: float64
- name: NY.GDP.FCST.CN
dtype: float64
- name: NY.GDP.FCST.KD
dtype: float64
- name: NY.GDP.FCST.KN
dtype: float64
- name: NY.GDP.MKTP.CD
dtype: float64
- name: NY.GDP.MKTP.CN
dtype: float64
- name: NY.GDP.MKTP.CN.AD
dtype: float64
- name: NY.GDP.MKTP.KD
dtype: float64
- name: NY.GDP.MKTP.KD.ZG
dtype: float64
- name: NY.GDP.MKTP.KN
dtype: float64
- name: NY.GDP.MKTP.PP.CD
dtype: float64
- name: NY.GDP.MKTP.PP.KD
dtype: float64
- name: NY.GDP.PCAP.CD
dtype: float64
- name: NY.GDP.PCAP.CN
dtype: float64
- name: NY.GDP.PCAP.KD
dtype: float64
- name: NY.GDP.PCAP.KD.ZG
dtype: float64
- name: NY.GDP.PCAP.KN
dtype: float64
- name: NY.GDP.PCAP.PP.CD
dtype: float64
- name: NY.GDP.PCAP.PP.KD
dtype: float64
- name: NY.GDS.TOTL.CD
dtype: float64
- name: NY.GDS.TOTL.CN
dtype: float64
- name: NY.GDS.TOTL.ZS
dtype: float64
- name: NY.GDY.TOTL.KN
dtype: float64
- name: NY.GNP.ATLS.CD
dtype: float64
- name: NY.GNP.MKTP.CD
dtype: float64
- name: NY.GNP.MKTP.CN
dtype: float64
- name: NY.GNP.MKTP.CN.AD
dtype: float64
- name: NY.GNP.MKTP.KD
dtype: float64
- name: NY.GNP.MKTP.KD.ZG
dtype: float64
- name: NY.GNP.MKTP.KN
dtype: float64
- name: NY.GNP.MKTP.PP.CD
dtype: float64
- name: NY.GNP.MKTP.PP.KD
dtype: float64
- name: NY.GNP.PCAP.CD
dtype: float64
- name: NY.GNP.PCAP.CN
dtype: float64
- name: NY.GNP.PCAP.KD
dtype: float64
- name: NY.GNP.PCAP.KD.ZG
dtype: float64
- name: NY.GNP.PCAP.KN
dtype: float64
- name: NY.GNP.PCAP.PP.CD
dtype: float64
- name: NY.GNP.PCAP.PP.KD
dtype: float64
- name: NY.GNS.ICTR.CD
dtype: float64
- name: NY.GNS.ICTR.CN
dtype: float64
- name: NY.GNS.ICTR.GN.ZS
dtype: float64
- name: NY.GNS.ICTR.ZS
dtype: float64
- name: NY.GSR.NFCY.CD
dtype: float64
- name: NY.GSR.NFCY.CN
dtype: float64
- name: NY.GSR.NFCY.KN
dtype: float64
- name: NY.TAX.NIND.CD
dtype: float64
- name: NY.TAX.NIND.CN
dtype: float64
- name: NY.TAX.NIND.KN
dtype: float64
- name: NY.TRF.NCTR.CD
dtype: float64
- name: NY.TRF.NCTR.CN
dtype: float64
- name: NY.TRF.NCTR.KN
dtype: float64
- name: NY.TTF.GNFS.KN
dtype: float64
- name: PA.NUS.ATLS
dtype: float64
- name: PA.NUS.FCRF
dtype: float64
- name: PA.NUS.PPP
dtype: float64
- name: PA.NUS.PPPC.RF
dtype: float64
- name: PA.NUS.PRVT.PP
dtype: float64
- name: PX.REX.REER
dtype: float64
- name: SI.RMT.COST.IB.ZS
dtype: float64
- name: SI.RMT.COST.OB.ZS
dtype: float64
- name: CORENS
dtype: float64
- name: CORESA
dtype: float64
- name: CPTOTNSXN
dtype: float64
- name: CPTOTSAXMZGY
dtype: float64
- name: CPTOTSAXN
dtype: float64
- name: CPTOTSAXNZGY
dtype: float64
- name: DMGSRMRCHNSXD
dtype: float64
- name: DMGSRMRCHSAXD
dtype: float64
- name: DPANUSLCU
dtype: float64
- name: DPANUSSPB
dtype: float64
- name: DPANUSSPF
dtype: float64
- name: DSTKMKTXD
dtype: float64
- name: DSTKMKTXN
dtype: float64
- name: DXGSRMRCHNSXD
dtype: float64
- name: DXGSRMRCHSAXD
dtype: float64
- name: IMPCOV
dtype: float64
- name: IPTOTNSKD
dtype: float64
- name: IPTOTSAKD
dtype: float64
- name: NEER
dtype: float64
- name: NYGDPMKTPSACD
dtype: float64
- name: NYGDPMKTPSACN
dtype: float64
- name: NYGDPMKTPSAKD
dtype: float64
- name: NYGDPMKTPSAKN
dtype: float64
- name: REER
dtype: float64
- name: RETSALESSA
dtype: float64
- name: TOT
dtype: float64
- name: JI.AGE.WAGE
dtype: float64
- name: JI.AGE.WAGE.OL
dtype: float64
- name: JI.AGR.WAGE.OL.ZS
dtype: float64
- name: JI.AGR.WAGE.ZS
dtype: float64
- name: JI.EMP.1564.OL.ZS
dtype: float64
- name: JI.EMP.1564.ZS
dtype: float64
- name: JI.EMP.AGRI.OL.ZS
dtype: float64
- name: JI.EMP.AGRI.ZS
dtype: float64
- name: JI.EMP.ARFC.OL.ZS
dtype: float64
- name: JI.EMP.ARFC.ZS
dtype: float64
- name: JI.EMP.CLRK.OL.ZS
dtype: float64
- name: JI.EMP.CLRK.ZS
dtype: float64
- name: JI.EMP.CNST.OL.ZS
dtype: float64
- name: JI.EMP.CNST.ZS
dtype: float64
- name: JI.EMP.COME.OL.ZS
dtype: float64
- name: JI.EMP.COME.ZS
dtype: float64
- name: JI.EMP.CRFT.OL.ZS
dtype: float64
- name: JI.EMP.CRFT.ZS
dtype: float64
- name: JI.EMP.ELEC.OL.ZS
dtype: float64
- name: JI.EMP.ELEC.ZS
dtype: float64
- name: JI.EMP.ELEM.OL.ZS
dtype: float64
- name: JI.EMP.ELEM.ZS
dtype: float64
- name: JI.EMP.FABU.OL.ZS
dtype: float64
- name: JI.EMP.FABU.ZS
dtype: float64
- name: JI.EMP.IFRM.OL.ZS
dtype: float64
- name: JI.EMP.IFRM.ZS
dtype: float64
- name: JI.EMP.INDU.OL.ZS
dtype: float64
- name: JI.EMP.INDU.ZS
dtype: float64
- name: JI.EMP.MACH.OL.ZS
dtype: float64
- name: JI.EMP.MACH.ZS
dtype: float64
- name: JI.EMP.MANF.OL.ZS
dtype: float64
- name: JI.EMP.MANF.ZS
dtype: float64
- name: JI.EMP.MINQ.OL.ZS
dtype: float64
- name: JI.EMP.MINQ.ZS
dtype: float64
- name: JI.EMP.OSRV.OL.ZS
dtype: float64
- name: JI.EMP.OSRV.ZS
dtype: float64
- name: JI.EMP.PADM.OL.ZS
dtype: float64
- name: JI.EMP.PADM.ZS
dtype: float64
- name: JI.EMP.PROF.OL.ZS
dtype: float64
- name: JI.EMP.PROF.ZS
dtype: float64
- name: JI.EMP.PUBS.OL.ZS
dtype: float64
- name: JI.EMP.PUBS.ZS
dtype: float64
- name: JI.EMP.SEOF.OL.ZS
dtype: float64
- name: JI.EMP.SEOF.ZS
dtype: float64
- name: JI.EMP.SERV.OL.ZS
dtype: float64
- name: JI.EMP.SERV.ZS
dtype: float64
- name: JI.EMP.SKAG.OL.ZS
dtype: float64
- name: JI.EMP.SKAG.ZS
dtype: float64
- name: JI.EMP.SVMK.OL.ZS
dtype: float64
- name: JI.EMP.SVMK.ZS
dtype: float64
- name: JI.EMP.TECH.OL.ZS
dtype: float64
- name: JI.EMP.TECH.ZS
dtype: float64
- name: JI.EMP.TOTL.SP.OL.ZS
dtype: float64
- name: JI.EMP.TOTL.SP.ZS
dtype: float64
- name: JI.EMP.TRCM.OL.ZS
dtype: float64
- name: JI.EMP.TRCM.ZS
dtype: float64
- name: JI.EMP.UNPD.NA.OL.ZS
dtype: float64
- name: JI.EMP.UNPD.NA.ZS
dtype: float64
- name: JI.EMP.WAGE.NA.OL.ZS
dtype: float64
- name: JI.EMP.WAGE.NA.ZS
dtype: float64
- name: JI.EMP.WAGE.OL.ZS
dtype: float64
- name: JI.EMP.WAGE.ZS
dtype: float64
- name: JI.IND.WAGE.OL.ZS
dtype: float64
- name: JI.IND.WAGE.ZS
dtype: float64
- name: JI.SRV.WAGE.OL.ZS
dtype: float64
- name: JI.SRV.WAGE.ZS
dtype: float64
- name: JI.TLF.ACTI.OL.ZS
dtype: float64
- name: JI.TLF.ACTI.ZS
dtype: float64
- name: JI.TLF.TOTL
dtype: float64
- name: JI.TLF.TOTL.OL
dtype: float64
- name: JI.UEM.1564.OL.ZS
dtype: float64
- name: JI.UEM.1564.ZS
dtype: float64
- name: JI.WAG.PBPV
dtype: float64
- name: JI.WAG.PBPV.OL
dtype: float64
- name: GFDD.AI.01
dtype: float64
- name: GFDD.AI.02
dtype: float64
- name: GFDD.AI.03
dtype: float64
- name: GFDD.AI.04
dtype: float64
- name: GFDD.AI.05
dtype: float64
- name: GFDD.AI.06
dtype: float64
- name: GFDD.AI.07
dtype: float64
- name: GFDD.AI.08
dtype: float64
- name: GFDD.AI.09
dtype: float64
- name: GFDD.AI.10
dtype: float64
- name: GFDD.AI.11
dtype: float64
- name: GFDD.AI.12
dtype: float64
- name: GFDD.AI.13
dtype: float64
- name: GFDD.AI.14
dtype: float64
- name: GFDD.AI.15
dtype: float64
- name: GFDD.AI.16
dtype: float64
- name: GFDD.AI.17
dtype: float64
- name: GFDD.AI.18
dtype: float64
- name: GFDD.AI.19
dtype: float64
- name: GFDD.AI.20
dtype: float64
- name: GFDD.AI.21
dtype: float64
- name: GFDD.AI.22
dtype: float64
- name: GFDD.AI.23
dtype: float64
- name: GFDD.AI.24
dtype: float64
- name: GFDD.AI.25
dtype: float64
- name: GFDD.AI.26
dtype: float64
- name: GFDD.AI.27
dtype: float64
- name: GFDD.AI.28
dtype: float64
- name: GFDD.AI.29
dtype: float64
- name: GFDD.AI.30
dtype: float64
- name: GFDD.AI.31
dtype: float64
- name: GFDD.AI.32
dtype: float64
- name: GFDD.AI.33
dtype: float64
- name: GFDD.AI.34
dtype: float64
- name: GFDD.AI.35
dtype: float64
- name: GFDD.AI.36
dtype: float64
- name: GFDD.AM.01
dtype: float64
- name: GFDD.AM.02
dtype: float64
- name: GFDD.AM.03
dtype: float64
- name: GFDD.DI.01
dtype: float64
- name: GFDD.DI.02
dtype: float64
- name: GFDD.DI.03
dtype: float64
- name: GFDD.DI.04
dtype: float64
- name: GFDD.DI.05
dtype: float64
- name: GFDD.DI.06
dtype: float64
- name: GFDD.DI.07
dtype: float64
- name: GFDD.DI.08
dtype: float64
- name: GFDD.DI.09
dtype: float64
- name: GFDD.DI.10
dtype: float64
- name: GFDD.DI.11
dtype: float64
- name: GFDD.DI.12
dtype: float64
- name: GFDD.DI.13
dtype: float64
- name: GFDD.DI.14
dtype: float64
- name: GFDD.DM.01
dtype: float64
- name: GFDD.DM.02
dtype: float64
- name: GFDD.DM.03
dtype: float64
- name: GFDD.DM.04
dtype: float64
- name: GFDD.DM.05
dtype: float64
- name: GFDD.DM.06
dtype: float64
- name: GFDD.DM.07
dtype: float64
- name: GFDD.DM.08
dtype: float64
- name: GFDD.DM.09
dtype: float64
- name: GFDD.DM.10
dtype: float64
- name: GFDD.EI.01
dtype: float64
- name: GFDD.EI.02
dtype: float64
- name: GFDD.EI.03
dtype: float64
- name: GFDD.EI.04
dtype: float64
- name: GFDD.EI.05
dtype: float64
- name: GFDD.EI.06
dtype: float64
- name: GFDD.EI.07
dtype: float64
- name: GFDD.EI.08
dtype: float64
- name: GFDD.EI.09
dtype: float64
- name: GFDD.EI.10
dtype: float64
- name: GFDD.EM.01
dtype: float64
- name: GFDD.OI.01
dtype: float64
- name: GFDD.OI.02
dtype: float64
- name: GFDD.OI.06
dtype: float64
- name: GFDD.OI.07
dtype: float64
- name: GFDD.OI.08
dtype: float64
- name: GFDD.OI.09
dtype: float64
- name: GFDD.OI.10
dtype: float64
- name: GFDD.OI.11
dtype: float64
- name: GFDD.OI.12
dtype: float64
- name: GFDD.OI.13
dtype: float64
- name: GFDD.OI.14
dtype: float64
- name: GFDD.OI.15
dtype: float64
- name: GFDD.OI.16
dtype: float64
- name: GFDD.OI.17
dtype: float64
- name: GFDD.OI.18
dtype: float64
- name: GFDD.OI.19
dtype: float64
- name: GFDD.OM.01
dtype: float64
- name: GFDD.OM.02
dtype: float64
- name: GFDD.SI.01
dtype: float64
- name: GFDD.SI.02
dtype: float64
- name: GFDD.SI.03
dtype: float64
- name: GFDD.SI.04
dtype: float64
- name: GFDD.SI.05
dtype: float64
- name: GFDD.SI.06
dtype: float64
- name: GFDD.SI.07
dtype: float64
- name: GFDD.SM.01
dtype: float64
- name: Country
dtype: string
- name: Year
dtype: string
splits:
- name: train
num_bytes: 4753965
num_examples: 1122
download_size: 2612915
dataset_size: 4753965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
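The schema above is a wide table: one row per `Country` and `Year`, with each World Bank indicator code (e.g. `NY.GDP.MKTP.CD`) as its own float column. For analysis it is often convenient to reshape such a row into long `(country, year, indicator, value)` tuples. The sketch below assumes this wide layout; the indicator values are made up for illustration and are not taken from the dataset.

```python
# A minimal sketch of reshaping one wide row (Country, Year, many indicator
# columns) into long (country, year, indicator, value) tuples.
# The indicator codes appear in the schema above; the values are invented.
row = {
    "Country": "India",
    "Year": "2015",
    "NY.GDP.MKTP.CD": 2.1e12,   # GDP, current US$ (illustrative value)
    "FP.CPI.TOTL.ZG": 4.9,      # CPI inflation, annual % (illustrative value)
}

def to_long(rec):
    """Yield (country, year, indicator, value) for each non-key column."""
    keys = {"Country", "Year"}
    for col, val in rec.items():
        if col not in keys and val is not None:
            yield (rec["Country"], rec["Year"], col, val)

long_rows = list(to_long(row))
print(long_rows[0])  # → ('India', '2015', 'NY.GDP.MKTP.CD', 2100000000000.0)
```

The same reshaping can be done with `pandas.melt` once the split is loaded as a DataFrame.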
|
zephyr-1111/x_dataset_0708150 | zephyr-1111 | 2025-06-24T14:27:35Z | 1,323 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-25T07:15:24Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** zephyr-1111/x_dataset_0708150
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5DaUuXQ38fukz4fZk7GZsKqAJC8Zum8K3HMhKirvjRGPxwTq
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though content can be multilingual due to the decentralized nature of data collection.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
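A record shaped like the fields above can be filtered and parsed with plain Python. The sketch below uses an invented record and assumes an ISO-style date string for the `datetime` field; both are illustrative, not taken from the actual dataset.

```python
from datetime import datetime

# A sample record matching the field list above (values are invented).
record = {
    "text": "Example tweet about markets",
    "label": "finance",
    "tweet_hashtags": ["#riyadh"],
    "datetime": "2025-02-01",   # assumed ISO date format for this sketch
    "username_encoded": "a1b2c3",
    "url_encoded": "",          # may be empty when the tweet has no URLs
}

def has_hashtags(rec: dict) -> bool:
    """True when the tweet carries at least one hashtag."""
    return bool(rec["tweet_hashtags"])

def posted_on(rec: dict) -> datetime:
    """Parse the string `datetime` field into a datetime object."""
    return datetime.strptime(rec["datetime"], "%Y-%m-%d")

print(has_hashtags(record))    # → True
print(posted_on(record).year)  # → 2025
```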
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to the X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{zephyr-11112025datauniversex_dataset_0708150,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={zephyr-1111},
year={2025},
url={https://huggingface.co/datasets/zephyr-1111/x_dataset_0708150},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 1771220
- **Date Range:** 2025-01-02T00:00:00Z to 2025-06-14T00:00:00Z
- **Last Updated:** 2025-06-24T14:27:34Z
### Data Distribution
- Tweets with hashtags: 13.88%
- Tweets without hashtags: 86.12%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 1110845 | 81.87% |
| 2 | #sixtonesann | 26152 | 1.93% |
| 3 | #thenextprinceep7 | 22220 | 1.64% |
| 4 | #ムサシノ輪舞曲 | 11327 | 0.83% |
| 5 | #サクサクヒムヒム | 10727 | 0.79% |
| 6 | #दहेज_दानव_से_मुक्ति | 10291 | 0.76% |
| 7 | #호시의_후반전도_함께할게 | 9115 | 0.67% |
| 8 | #riyadh | 8184 | 0.60% |
| 9 | #thameposeriesep9 | 7605 | 0.56% |
| 10 | #tiktok | 6521 | 0.48% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-25T07:15:23Z | 414446 | 414446 |
| 2025-01-25T07:15:50Z | 414446 | 828892 |
| 2025-02-18T03:37:16Z | 463345 | 1292237 |
| 2025-06-24T14:27:34Z | 478983 | 1771220 |
|
zephyr-1111/x_dataset_070513 | zephyr-1111 | 2025-06-24T14:24:27Z | 913 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-25T07:16:21Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** zephyr-1111/x_dataset_070513
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5Fc1jF83RPejiQ6nZyKdYY4dknDY6A8xPdje3GhF1CZzMNuv
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though content can be multilingual due to the decentralized nature of data collection.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
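Since there are no fixed splits, one common approach is to partition records on the `datetime` field with a cutoff date. The sketch below uses invented records and assumes an ISO-style date string; adapt the format to whatever the actual field contains.

```python
from datetime import datetime

# Illustrative records only; the real dataset is continuously updated,
# so splitting on the `datetime` field keeps train/test temporally disjoint.
records = [
    {"text": "older tweet", "datetime": "2025-01-10"},
    {"text": "newer tweet", "datetime": "2025-03-05"},
]

cutoff = datetime(2025, 2, 1)

def parse_dt(rec: dict) -> datetime:
    """Parse the string `datetime` field (format assumed for this sketch)."""
    return datetime.strptime(rec["datetime"], "%Y-%m-%d")

train = [r for r in records if parse_dt(r) < cutoff]
test = [r for r in records if parse_dt(r) >= cutoff]

print(len(train), len(test))  # → 1 1
```

Splitting by time rather than at random avoids leaking future content into the training set.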
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{zephyr-11112025datauniversex_dataset_070513,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={zephyr-1111},
year={2025},
url={https://huggingface.co/datasets/zephyr-1111/x_dataset_070513},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 2686761
- **Date Range:** 2025-01-02T00:00:00Z to 2025-06-14T00:00:00Z
- **Last Updated:** 2025-06-24T14:24:26Z
### Data Distribution
- Tweets with hashtags: 10.92%
- Tweets without hashtags: 89.08%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 1110845 | 79.10% |
| 2 | #sixtonesann | 26152 | 1.86% |
| 3 | #thenextprinceep7 | 22220 | 1.58% |
| 4 | #ムサシノ輪舞曲 | 11327 | 0.81% |
| 5 | #サクサクヒムヒム | 10727 | 0.76% |
| 6 | #दहेज_दानव_से_मुक्ति | 10291 | 0.73% |
| 7 | #riyadh | 9169 | 0.65% |
| 8 | #호시의_후반전도_함께할게 | 9115 | 0.65% |
| 9 | #tiktok | 9014 | 0.64% |
| 10 | #箱根駅伝 | 8147 | 0.58% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-25T07:15:23Z | 414446 | 414446 |
| 2025-01-25T07:15:50Z | 414446 | 828892 |
| 2025-01-25T07:16:19Z | 453526 | 1282418 |
| 2025-01-25T07:16:50Z | 453526 | 1735944 |
| 2025-02-18T03:38:23Z | 471834 | 2207778 |
| 2025-06-24T14:24:26Z | 478983 | 2686761 |
|
jinaai/student-enrollment_beir | jinaai | 2025-06-24T13:55:29Z | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:eu"
] | [] | 2025-06-19T11:33:44Z | null | ---
dataset_info:
- config_name: corpus
features:
- name: corpus-id
dtype: int64
- name: image
dtype: image
splits:
- name: test
num_bytes: 468914392.0
num_examples: 489
download_size: 468924871
dataset_size: 468914392.0
- config_name: qrels
features:
- name: query-id
dtype: int64
- name: corpus-id
dtype: int64
- name: score
dtype: int64
splits:
- name: test
num_bytes: 24000
num_examples: 1000
download_size: 9957
dataset_size: 24000
- config_name: queries
features:
- name: query-id
dtype: int64
- name: query
dtype: string
splits:
- name: test
num_bytes: 119703
num_examples: 1000
download_size: 28021
dataset_size: 119703
configs:
- config_name: corpus
data_files:
- split: test
path: default/corpus/test-*
- config_name: qrels
data_files:
- split: test
path: default/qrels/test-*
- config_name: queries
data_files:
- split: test
path: default/queries/test-*
---
This is a copy of https://huggingface.co/datasets/jinaai/student-enrollment reformatted into the BEIR format. For further information, such as licensing, please refer to the original dataset.
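A BEIR-style dataset splits retrieval data into three configs — `corpus`, `queries`, and `qrels` — joined on their id columns. A minimal sketch of how `qrels` rows map queries to relevant corpus entries (the row values below are invented for illustration, not taken from this dataset):

```python
# Illustrative qrels rows: each row links a query-id to a relevant
# corpus-id with a relevance score, as in the BEIR format.
qrels_rows = [
    {"query-id": 0, "corpus-id": 12, "score": 1},
    {"query-id": 1, "corpus-id": 7, "score": 1},
    {"query-id": 1, "corpus-id": 30, "score": 1},
]

# Build the query-id -> {corpus-id: score} mapping that BEIR-style
# evaluation tools usually expect.
qrels = {}
for row in qrels_rows:
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = row["score"]

print(qrels[1])  # {7: 1, 30: 1}
```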
# Disclaimer
This dataset may contain publicly available images or text data. All data is provided for research and educational purposes only. If you are the rights holder of any content and have concerns regarding intellectual property or copyright, please contact us at "support-data (at) jina.ai" for removal. We do not collect or process personal, sensitive, or private information intentionally. If you believe this dataset includes such content (e.g., portraits, location-linked images, medical or financial data, or NSFW content), please notify us, and we will take appropriate action.
# Copyright
All rights are reserved to the original authors of the documents.
|
jinaai/jdocqa | jinaai | 2025-06-24T13:52:34Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2403.19454",
"region:eu"
] | [] | 2025-06-10T12:45:03Z | null | ---
dataset_info:
features:
- name: query
dtype: string
- name: image
dtype: image
- name: image_filename
dtype: string
- name: text_description
dtype: string
splits:
- name: test
num_bytes: 237420405.0
num_examples: 758
download_size: 237236360
dataset_size: 237420405.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Japanese Document Retrieval
Document question answering from the [JDocQA dataset](https://huggingface.co/datasets/jlli/JDocQA-nonbinary), test split. The `text_description` column contains OCR text extracted from the images using EasyOCR.
Paper: https://arxiv.org/abs/2403.19454
Questions: 758
Language: Japanese
Example:
```python
{
'query': '八王子神社は「はちおっつぁん」と呼ばれ住民に親しまれていますが、事故が起きたような言い伝えはありますか。\n解答は自由に記述してください。',
'image_filename': 'page_0.jpg',
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=3814x5342 at 0x7B9DA7BD0B20>,
'answer': '牛や馬の商売をしている人が仏像を買い、拝んでいたところ「みんなが幸せになれるようにしなさい」と夢に金物が現れ、八つのかまを重ねて仏像を入れ、その上にモミの木を植え八王子神社と名付けたところ作物が良く実りましたが、馬鹿にしたよその村人が馬から落ちて亡くなったといわれています。'
}
```
# Disclaimer
This dataset may contain publicly available images or text data. All data is provided for research and educational purposes only. If you are the rights holder of any content and have concerns regarding intellectual property or copyright, please contact us at "support-data (at) jina.ai" for removal. We do not collect or process personal, sensitive, or private information intentionally. If you believe this dataset includes such content (e.g., portraits, location-linked images, medical or financial data, or NSFW content), please notify us, and we will take appropriate action.
# Copyright
All rights are reserved to the original authors of the documents.
|
jinaai/europeana-nl-legal | jinaai | 2025-06-24T13:52:19Z | 65 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:eu"
] | [] | 2024-12-18T15:58:26Z | null | ---
dataset_info:
features:
- name: query
dtype: string
- name: image
dtype: image
- name: image_filename
dtype: string
- name: links
dtype: string
- name: attributions
dtype: string
- name: text_description
dtype: string
splits:
- name: test
num_bytes: 132202700.0
num_examples: 380
download_size: 90299186
dataset_size: 132202700.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Europeana Legal Dutch Documents Dataset
This dataset was created from records of the [Europeana online collection](https://europeana.eu) by selecting scans of Dutch historical legal documents. These documents consist of scans of typed paper documents.
Queries were generated with Qwen2b and manually verified by a human annotator. Attributions to the sources are added in the `attributions` column of each dataset item, and links to the original documents in the `links` column.
The `text_description` column contains OCR text extracted from the images using EasyOCR.
Example:
```
{
'query': 'In welk jaar werd de Koninklijke Boodschap van Colombia gesloten?',
'image_filename': 'images/9200401__BibliographicResource_1000056988753.jpg',
'links': 'https://www.europeana.eu/en/item/9200401/BibliographicResource_1000056988753',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=4118x5683 at 0x11FCD9950>,
    'attributions': 'Tractaat van vriendschap, handel en scheepvaart, gesloten met de Republiek Colombia - https://www.europeana.eu/item/9200401/BibliographicResource_1000056988753. Koninklijke Bibliotheek - http://www.statengeneraaldigitaal.nl/document?id=sgd:mpeg21:18291830:0000325. Public Domain Mark - http://creativecommons.org/publicdomain/mark/1.0/',
}
```
# Disclaimer
This dataset may contain publicly available images or text data. All data is provided for research and educational purposes only. If you are the rights holder of any content and have concerns regarding intellectual property or copyright, please contact us at "support-data (at) jina.ai" for removal. We do not collect or process personal, sensitive, or private information intentionally. If you believe this dataset includes such content (e.g., portraits, location-linked images, medical or financial data, or NSFW content), please notify us, and we will take appropriate action.
# Copyright
All rights are reserved to the original authors of the documents.
|
jinaai/donut_vqa | jinaai | 2025-06-24T13:51:48Z | 21 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:eu"
] | [] | 2025-06-10T12:40:33Z | null | ---
dataset_info:
features:
- name: query
dtype: string
- name: image
dtype: image
- name: image_filename
dtype: string
- name: text_description
dtype: string
splits:
- name: test
num_bytes: 100347289.0
num_examples: 800
download_size: 91711782
dataset_size: 100347289.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# DonutVQA Dataset
This dataset is derived from the [donut-vqa dataset](https://huggingface.co/datasets/warshakhan/donut_vqa_ISynHMP), reformatting the test split with modified field names, so that it can be used in the ViDoRe benchmark.
The `text_description` column contains OCR text extracted from the images using EasyOCR.
# Disclaimer
This dataset may contain publicly available images or text data. All data is provided for research and educational purposes only. If you are the rights holder of any content and have concerns regarding intellectual property or copyright, please contact us at "support-data (at) jina.ai" for removal. We do not collect or process personal, sensitive, or private information intentionally. If you believe this dataset includes such content (e.g., portraits, location-linked images, medical or financial data, or NSFW content), please notify us, and we will take appropriate action.
# Copyright
All rights are reserved to the original authors of the documents.
|
czl/yongchun_public_gym | czl | 2025-06-24T13:30:18Z | 362 | 0 | [
"task_categories:time-series-forecasting",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"time-series-forecasting"
] | 2025-06-04T12:54:14Z | null | ---
license: apache-2.0
task_categories:
- time-series-forecasting
language:
- en
- zh
pretty_name: 永春活力館健身房人數 Dataset
size_categories:
- 1K<n<10K
---
# 永春活力館健身房人數 Dataset (Yongchun Vitality Center gym occupancy)
Timestamps are in seconds.
Source: https://xysc.teamxports.com/
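Assuming the per-second timestamps are Unix epoch seconds (an assumption — verify against the source data), they can be converted to readable datetimes with the standard library:

```python
from datetime import datetime, timezone

# Convert an epoch-seconds timestamp to a human-readable UTC datetime.
# 1700000000 is an arbitrary example value, not taken from the dataset.
ts = 1700000000
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2023-11-14 22:13:20
```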
|
sergiov2000/eval_act_so100_leaderarm_a4 | sergiov2000 | 2025-06-24T13:14:54Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-24T13:14:47Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 848,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.above": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
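The `data_path` and `video_path` values in `info.json` are Python `str.format` templates; resolving them for episode 0 of chunk 0 gives the concrete file locations:

```python
# Resolve the episode file paths from the templates in meta/info.json.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

episode_file = data_path.format(episode_chunk=0, episode_index=0)
video_file = video_path.format(
    episode_chunk=0, video_key="observation.images.above", episode_index=0
)

print(episode_file)  # data/chunk-000/episode_000000.parquet
print(video_file)    # videos/chunk-000/observation.images.above/episode_000000.mp4
```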
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
nunoFcul/yourbench_advanced_example | nunoFcul | 2025-06-24T11:43:09Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T11:00:34Z | null | ---
dataset_info:
- config_name: chunked
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
- name: chunks
list:
- name: chunk_id
dtype: string
- name: chunk_text
dtype: string
- name: multihop_chunks
list:
- name: chunk_ids
sequence: string
- name: chunks_text
sequence: string
splits:
- name: train
num_bytes: 1137063
num_examples: 40
download_size: 87758
dataset_size: 1137063
- config_name: ingested
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
splits:
- name: train
num_bytes: 72116
num_examples: 8
download_size: 13742
dataset_size: 72116
- config_name: lighteval
features:
- name: question
dtype: string
- name: additional_instructions
dtype: string
- name: ground_truth_answer
dtype: string
- name: gold
sequence: string
- name: choices
sequence: 'null'
- name: question_category
dtype: string
- name: kind
dtype: string
- name: estimated_difficulty
dtype: int64
- name: citations
sequence: string
- name: document_id
dtype: string
- name: chunk_ids
sequence: string
- name: question_generating_model
dtype: string
- name: chunks
sequence: string
- name: document
dtype: string
- name: document_summary
dtype: string
- name: answer_citation_score
dtype: float64
- name: chunk_citation_score
dtype: float64
- name: citation_score
dtype: float64
splits:
- name: train
num_bytes: 3107530
num_examples: 256
download_size: 77196
dataset_size: 3107530
- config_name: multi_hop_questions
features:
- name: document_id
dtype: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: raw_response
dtype: string
- name: citations
sequence: string
- name: source_chunk_ids
sequence: string
splits:
- name: train
num_bytes: 214485
num_examples: 35
download_size: 48172
dataset_size: 214485
- config_name: single_shot_questions
features:
- name: document_id
dtype: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: raw_response
dtype: string
- name: citations
sequence: string
- name: chunk_id
dtype: string
splits:
- name: train
num_bytes: 321396
num_examples: 69
download_size: 62477
dataset_size: 321396
- config_name: summarized
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
splits:
- name: train
num_bytes: 249342
num_examples: 20
download_size: 56704
dataset_size: 249342
configs:
- config_name: chunked
data_files:
- split: train
path: chunked/train-*
- config_name: ingested
data_files:
- split: train
path: ingested/train-*
- config_name: lighteval
data_files:
- split: train
path: lighteval/train-*
- config_name: multi_hop_questions
data_files:
- split: train
path: multi_hop_questions/train-*
- config_name: single_shot_questions
data_files:
- split: train
path: single_shot_questions/train-*
- config_name: summarized
data_files:
- split: train
path: summarized/train-*
---
|
william-1111/x_dataset_0104179 | william-1111 | 2025-06-24T11:16:45Z | 985 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T06:43:47Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** william-1111/x_dataset_0104179
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5CnoeLSSFZq9jmZfuKT7WpHoEEQJKvX3Nf4ZWFqiqjLZpfeS
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: predominantly English, though the dataset can be multilingual due to the decentralized way it is created.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
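The splits guidance above can be sketched as a simple cutoff on the `datetime` field (the rows below are invented placeholders, not actual dataset content):

```python
# Split records into train/validation by a date cutoff on the
# `datetime` field. Rows here are invented placeholders.
records = [
    {"text": "tweet a", "datetime": "2025-01-10"},
    {"text": "tweet b", "datetime": "2025-03-02"},
    {"text": "tweet c", "datetime": "2025-05-20"},
]

cutoff = "2025-04-01"  # ISO-format dates compare correctly as strings
train = [r for r in records if r["datetime"] < cutoff]
valid = [r for r in records if r["datetime"] >= cutoff]

print(len(train), len(valid))  # 2 1
```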
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{william-11112025datauniversex_dataset_0104179,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={william-1111},
year={2025},
url={https://huggingface.co/datasets/william-1111/x_dataset_0104179},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 1455999
- **Date Range:** 2025-01-02T00:00:00Z to 2025-06-14T00:00:00Z
- **Last Updated:** 2025-06-24T11:16:45Z
### Data Distribution
- Tweets with hashtags: 23.58%
- Tweets without hashtags: 76.42%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 1112737 | 76.42% |
| 2 | #riyadh | 23097 | 1.59% |
| 3 | #マテムり | 15966 | 1.10% |
| 4 | #pbbcollab6thduoeviction | 11023 | 0.76% |
| 5 | #tiktok | 8638 | 0.59% |
| 6 | #箱根駅伝 | 8147 | 0.56% |
| 7 | #thameposeriesep9 | 7605 | 0.52% |
| 8 | #wtcfinal2025 | 6398 | 0.44% |
| 9 | #first_showcase | 6311 | 0.43% |
| 10 | #ad | 5465 | 0.38% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T06:44:27Z | 471976 | 471976 |
| 2025-02-18T03:40:05Z | 506494 | 978470 |
| 2025-06-24T11:16:45Z | 477529 | 1455999 |
|
TitouanCh/drug-seq-u2os-novartis | TitouanCh | 2025-06-24T10:47:18Z | 114 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"biology"
] | [] | 2025-06-20T13:15:21Z | null | ---
license: mit
tags:
- biology
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: counts
sequence: int32
- name: counts_norm
sequence: float32
- name: counts_log
sequence: float32
- name: counts_log_norm
sequence: float32
- name: gene_names
sequence: string
- name: control_counts
sequence: float32
- name: control_counts_norm
sequence: float32
- name: control_counts_log
sequence: float32
- name: control_counts_log_norm
sequence: float32
- name: delta_counts
sequence: float32
- name: delta_counts_norm
sequence: float32
- name: delta_counts_log
sequence: float32
- name: delta_counts_log_norm
sequence: float32
- name: cell_line
dtype: string
- name: perturbation
dtype: string
- name: compound_concentration
dtype: float64
- name: compound_unit
dtype: string
- name: compound_smiles
dtype: string
- name: mechanism
dtype: string
- name: moa
dtype: string
- name: biological_effect
dtype: string
- name: experimental_id
dtype: string
- name: timepoint
dtype: string
- name: text
dtype: string
- name: text_embeddings
sequence: float32
- name: chembert_embeddings
sequence: float32
splits:
- name: train
num_bytes: 176083286910
num_examples: 49392
download_size: 65016664023
dataset_size: 176083286910
---
**I AM NOT AFFILIATED WITH NOVARTIS IN ANY WAY; THIS IS SIMPLY AN UPLOAD OF THEIR DATASET, "[NOVARTIS/DRUG-SEQ U2OS MOABOX DATASET](https://zenodo.org/records/14291446)."**
# Novartis DRUG-seq U2OS MoABox Dataset
This dataset profiles transcriptomic responses of the **U-2 OS** human osteosarcoma cell line to a broad collection of small molecule perturbations. It contains **49,392 observations** spanning **3,742 unique compounds** tested at **4 distinct dosages + `0.0`**, each annotated with their respective **mechanisms of action (MoA)**.
Each observation records gene expression data for **59,594 genes**.
The dataset was generated using the DRUG-seq platform, enabling high-throughput, unbiased transcriptomic readouts suited for drug discovery applications.
## Additional Information
- Perturbation dosages span four unique non-zero concentration values, plus a `0.0` dose.
- Observations include multiple experimental replicates and plate layouts.
- **Normalized counts** were scaled so that the total expression per cell sums to `1e4`.
- **Control counts** represent the average expression of each gene across all control cells.
- **Delta values** are computed as the difference between each sample's expression and the corresponding control mean.
- **SMILES** strings and **mechanism of action (MoA)** annotations curated by Novartis, retrieved from the [ChEMBL](https://www.ebi.ac.uk/chembl/) database and enhanced with additional sources.
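The count transformations described above can be sketched in pure Python with a toy 3-gene vector (the real data has 59,594 genes; the exact log transform used upstream is not specified, so `log1p` is shown here as a common choice for count data):

```python
import math

# Toy raw counts for one observation and the per-gene control mean.
# Values are invented for illustration only.
counts = [125, 375, 500]              # raw counts for one sample
control_mean = [150.0, 300.0, 450.0]  # mean expression of control cells

# Normalized counts: scale so the per-cell total sums to 1e4.
total = sum(counts)
counts_norm = [c / total * 1e4 for c in counts]

# Log transform (log1p assumed; see the caveat above).
counts_log = [math.log1p(c) for c in counts]

# Delta values: sample expression minus the control mean, per gene.
delta_counts = [c - m for c, m in zip(counts, control_mean)]

print(counts_norm)   # [1250.0, 3750.0, 5000.0]
print(delta_counts)  # [-25.0, 75.0, 50.0]
```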
## Citations
> Hadjikyriacou, A., Yang, C., Henault, M., *et al.*
> **Novartis DRUG-seq U2OS MoABox Dataset**
> [Novartis DRUG-seq GitHub Repository](https://github.com/Novartis/DRUG-seq)
> Hadjikyriacou, A., Yang, C., Henault, M., Ge, R., Mansur, L., Lindeman, A., Russ, C., Renner, S., Hild, M., Jenkins, J., Gubser-Keller, C., Li, J., Ho, D. J., Neri, M., Sigoillot, F. D., & Ihry, R. (2025).
> **Novartis/DRUG-seq U2OS MoABox Dataset (1.0.0) [Data set].** Zenodo.
> https://doi.org/10.5281/zenodo.14291446
> Li, J., Ho, D. J., Henault, M., Yang, C., Neri, M., Ge, R., Renner, S., Mansur, L., Lindeman, A., Tumkaya, T., Russ, C., Hild, M., Gubser Keller, C., Jenkins, J. L., Worringer, K. A., Sigoillot, F. D., & Ihry, R. J. (2021).
> **DRUG-seq Provides Unbiased Biological Activity Readouts for Drug Discovery.** bioRxiv.
> https://doi.org/10.1101/2021.06.07.447456
> [Full text PDF](https://www.biorxiv.org/content/early/2021/06/08/2021.06.07.447456.full.pdf) |
lowyun-izeno/guanaco-llama2-1k-eng | lowyun-izeno | 2025-06-24T10:22:59Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T09:33:48Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1751774.1023466215
num_examples: 1000
download_size: 951148
dataset_size: 1751774.1023466215
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aketen0654/sendeneme | aketen0654 | 2025-06-24T10:22:37Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T10:21:08Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction,"input","Response"
dtype: string
splits:
- name: train
num_bytes: 135991.45890410958
num_examples: 262
- name: test
num_bytes: 15571.54109589041
num_examples: 30
download_size: 75411
dataset_size: 151563.0
---
# Dataset Card for "sendeneme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Boadiwaa/interaction-log2 | Boadiwaa | 2025-06-24T10:10:29Z | 29 | 0 | [
"size_categories:n<1K",
"format:csv",
"modality:audio",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-04T13:06:11Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
louisbrulenaudet/code-domaine-etat | louisbrulenaudet | 2025-06-24T09:32:02Z | 453 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code du domaine de l'Etat"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2024-03-25T20:35:57Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code du domaine de l'Etat
source_datasets:
- original
pretty_name: Code du domaine de l'Etat
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code du domaine de l'Etat, non-instruct (2025-06-23)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
The data is normally refreshed daily across all legal codes. The project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets

from ragoon import load_datasets

req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

datasets_list = load_datasets(
    req=req,
    streaming=False,
)

dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
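As a quick illustration of the schema above, the sketch below builds a minimal `item` dictionary with a few of the documented keys and formats a human-readable citation from it. The field values here are invented for the example and do not come from the dataset.

```python
# Minimal sketch of working with one record shaped like the `item`
# dictionary documented above. All values below are illustrative only.
item = {
    "ref": "Code du domaine de l'Etat, art. L1",
    "num": "L1",
    "etat": "VIGUEUR",
    "dateDebut": "2006-07-01",
    "texte": "Le domaine de l'Etat comprend...",
}

def cite(article: dict) -> str:
    """Format a short citation string from a record."""
    return (
        f"{article['ref']} "
        f"({article['etat']}, en vigueur depuis {article['dateDebut']})"
    )

print(cite(item))
```

The same accessor pattern applies to any of the other keys listed above, such as `sectionParentTitre` or `fullSectionsTitre`.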
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
louisbrulenaudet/code-commande-publique | louisbrulenaudet | 2025-06-24T09:31:54Z | 467 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/1458",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code de la commande publique"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2023-12-12T18:59:37Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de la commande publique
source_datasets:
- original
pretty_name: Code de la commande publique
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de la commande publique, non-instruct (2025-06-23)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
The data is normally refreshed daily across all legal codes. The project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets

from ragoon import load_datasets

req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

datasets_list = load_datasets(
    req=req,
    streaming=False,
)

dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
louisbrulenaudet/code-civil | louisbrulenaudet | 2025-06-24T09:31:54Z | 467 | 1 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/1442",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code civil"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2023-12-12T01:26:22Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code civil
source_datasets:
- original
pretty_name: Code civil
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code civil, non-instruct (2025-06-23)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
The data is normally refreshed daily across all legal codes. The project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets

from ragoon import load_datasets

req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

datasets_list = load_datasets(
    req=req,
    streaming=False,
)

dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
louisbrulenaudet/code-aviation-civile | louisbrulenaudet | 2025-06-24T09:31:53Z | 468 | 1 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code de l'aviation civile"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2024-03-25T19:06:39Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de l'aviation civile
source_datasets:
- original
pretty_name: Code de l'aviation civile
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de l'aviation civile, non-instruct (2025-06-23)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
The data is normally refreshed daily across all legal codes. The project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets

from ragoon import load_datasets

req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

datasets_list = load_datasets(
    req=req,
    streaming=False,
)

dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
littleGuagua/x_dataset_39615 | littleGuagua | 2025-06-24T07:52:27Z | 830 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T13:52:27Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_39615
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5DcWuS46Y4kwHXeqaALnhFuxthsAyeQa2co4mH41c2SnvpxK
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though the dataset can be multilingual due to the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
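As a minimal sketch of a timestamp-based split (toy rows stand in for dataset instances; the real `datetime` field is an ISO-format string, so plain string comparison orders correctly):

```python
# Toy rows standing in for dataset instances; the real dataset's "datetime"
# field is an ISO-format string, so lexicographic comparison sorts by time.
rows = [
    {"text": "tweet a", "datetime": "2025-01-25T10:00:00Z"},
    {"text": "tweet b", "datetime": "2025-02-01T09:30:00Z"},
    {"text": "tweet c", "datetime": "2025-02-10T18:45:00Z"},
]

cutoff = "2025-02-01T00:00:00Z"
train_rows = [r for r in rows if r["datetime"] < cutoff]
test_rows = [r for r in rows if r["datetime"] >= cutoff]

print(len(train_rows), len(test_rows))  # 1 2
```

The same filter can be applied to the full dataset with `datasets.Dataset.filter` after loading.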
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_39615,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_39615},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 51398762
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-13T00:00:00Z
- **Last Updated:** 2025-02-18T16:38:41Z
### Data Distribution
- Tweets with hashtags: 38.97%
- Tweets without hashtags: 61.03%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 31368286 | 61.03% |
| 2 | #riyadh | 282374 | 0.55% |
| 3 | #zelena | 215162 | 0.42% |
| 4 | #tiktok | 189061 | 0.37% |
| 5 | #bbb25 | 139383 | 0.27% |
| 6 | #ad | 107606 | 0.21% |
| 7 | #theheartkillersep10 | 72118 | 0.14% |
| 8 | #bbmzansi | 67297 | 0.13% |
| 9 | #pr | 61195 | 0.12% |
| 10 | #theheartkillersep9 | 57936 | 0.11% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T13:53:02Z | 1831074 | 1831074 |
| 2025-01-30T01:56:02Z | 8072120 | 9903194 |
| 2025-02-02T13:58:25Z | 7485404 | 17388598 |
| 2025-02-06T02:02:01Z | 8826537 | 26215135 |
| 2025-02-09T14:05:24Z | 8362749 | 34577884 |
| 2025-02-13T02:10:30Z | 6902157 | 41480041 |
| 2025-02-16T13:55:11Z | 8605299 | 50085340 |
| 2025-02-18T01:07:04Z | 691061 | 50776401 |
| 2025-02-18T16:38:41Z | 622361 | 51398762 |
|
UGRIP-LM-Polygraph/medmcqa-direct | UGRIP-LM-Polygraph | 2025-06-24T07:45:55Z | 53 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-19T14:29:06Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 79149073
num_examples: 182822
- name: test
num_bytes: 2522307
num_examples: 6150
- name: validation
num_bytes: 1884252
num_examples: 4183
download_size: 27097865
dataset_size: 83555632
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
laxmacl/synthetic-math-docs-rigorous-20250624_125646 | laxmacl | 2025-06-24T07:29:59Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T07:29:54Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: imagewidth
dtype: int64
- name: pdf_name
dtype: string
- name: page_number
dtype: int64
- name: markdown
dtype: string
- name: html
dtype: string
- name: layout
dtype: string
- name: lines
dtype: string
- name: images
dtype: string
- name: equations
dtype: string
- name: tables
dtype: string
- name: page_size
dtype: string
- name: content_list
dtype: string
- name: base_layout_detection
dtype: string
- name: pdf_info
dtype: string
- name: system_prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2777444.0
num_examples: 7
download_size: 1497044
dataset_size: 2777444.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sid003/coppercapON | sid003 | 2025-06-24T07:28:09Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-23T12:16:46Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 2,
"total_frames": 2319,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.wrist.left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
laxmacl/synthetic-math-docs-rigorous-20250624_125218 | laxmacl | 2025-06-24T07:25:08Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T07:24:53Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: imagewidth
dtype: int64
- name: pdf_name
dtype: string
- name: page_number
dtype: int64
- name: markdown
dtype: string
- name: html
dtype: string
- name: layout
dtype: string
- name: lines
dtype: string
- name: images
dtype: string
- name: equations
dtype: string
- name: tables
dtype: string
- name: page_size
dtype: string
- name: content_list
dtype: string
- name: base_layout_detection
dtype: string
- name: pdf_info
dtype: string
- name: system_prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2350054.0
num_examples: 7
download_size: 1318202
dataset_size: 2350054.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sistemas-upta/redes_instruction_dataset_sentence_t_rw2 | sistemas-upta | 2025-06-24T05:33:47Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-24T05:31:58Z | null | ---
dataset_info:
features:
- name: Instruction
dtype: string
- name: Response
dtype: string
- name: Instruction_Len
dtype: int64
- name: Response_Len
dtype: int64
splits:
- name: train
num_bytes: 21384
num_examples: 102
download_size: 13561
dataset_size: 21384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yoonholee/wikispeedia-paths | yoonholee | 2025-06-24T05:24:16Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-24T05:17:32Z | null | ---
dataset_info:
features:
- name: duration_sec
dtype: int64
- name: path
sequence: string
- name: finished
dtype: bool
- name: rating
dtype: string
splits:
- name: train
num_bytes: 5688990
num_examples: 51318
download_size: 1526382
dataset_size: 5688990
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ConquestAce/wildcards | ConquestAce | 2025-06-24T04:36:48Z | 587 | 1 | [
"license:unlicense",
"size_categories:10K<n<100K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-03-14T06:38:12Z | null | ---
license: unlicense
---
# 🎴 Wildcard Prompt Dataset for Natural Language & Danbooru
A curated collection of wildcard prompt files designed for prompt templating systems like [PromptHero Wildcards](https://github.com/PromptHero/wildcards), [A1111 Dynamic Prompts](https://github.com/adieyal/sd-dynamic-prompts), and similar tools used in text-to-image generation (e.g. Stable Diffusion). Organized for both **natural-language-friendly** users and **Danbooru-style** taggers.
---
## 📁 Structure
The repository is structured into two main directories (example):
```
wildcards/
├── natural-language/
│   ├── artists.txt
│   ├── characters.txt
│   ├── clothing.txt
│   ├── styles.txt
│   └── poses.txt
└── danbooru/
    ├── artists.txt
    ├── characters.txt
    ├── clothing.txt
    ├── styles.txt
    └── poses.txt
```
Each `.txt` file contains a list of interchangeable keywords or phrases, one per line. These are intended to be referenced in prompts using a double-curly-brace syntax.
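A minimal example, assuming a templating engine that expands `{{file}}` references (the exact placeholder syntax depends on the wildcard tool you use):

```
A {{styles}} portrait of {{characters}} wearing {{clothing}}, by {{artists}}
```

At generation time, each `{{...}}` placeholder is replaced with a random line from the corresponding `.txt` file.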
## 📌 Notes
* Intended for artistic/educational research, especially in generative AI and prompt engineering.
* Contributions welcome via PRs or forks.
Check out: https://conquestace.com/wildcarder/ to see the wildcards in action.
|
youssefkhalil320/pairs_three_scores_v15 | youssefkhalil320 | 2025-06-24T04:30:59Z | 9 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T00:33:20Z | null | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 546924841
num_examples: 9500043
- name: eval
num_bytes: 28785839
num_examples: 500003
download_size: 588338366
dataset_size: 575710680
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
|
sleeping-ai/Sleeping-DISCO-9M | sleeping-ai | 2025-06-24T04:29:32Z | 707 | 7 | [
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.14293",
"region:us"
] | [] | 2025-02-23T08:34:40Z | null | ---
configs:
- config_name: a
data_files: "a.jsonl"
- config_name: b
data_files: "b.jsonl"
- config_name: c
data_files: "c.jsonl"
- config_name: d
data_files: "d.jsonl"
- config_name: e
data_files: "e.jsonl"
- config_name: f
data_files: "f.jsonl"
- config_name: g
data_files: "g.jsonl"
- config_name: h
data_files: "h.jsonl"
- config_name: i
data_files: "i.jsonl"
- config_name: j
data_files: "j.jsonl"
- config_name: k
data_files: "k.jsonl"
- config_name: l
data_files: "l.jsonl"
- config_name: m
data_files: "m.jsonl"
- config_name: n
data_files: "n.jsonl"
- config_name: o
data_files: "o.jsonl"
- config_name: p
data_files: "p.jsonl"
- config_name: q
data_files: "q.jsonl"
- config_name: r
data_files: "r.jsonl"
- config_name: s
data_files: "s.jsonl"
- config_name: t
data_files: "t.jsonl"
- config_name: u
data_files: "u.jsonl"
- config_name: v
data_files: "v.jsonl"
- config_name: w
data_files: "w.jsonl"
- config_name: x
data_files: "x.jsonl"
- config_name: y
data_files: "y.jsonl"
- config_name: z
data_files: "z.jsonl"
---
# Sleeping-DISCO-9M
**Sleeping-DISCO-9M** is a large-scale foundation dataset for **generative music modeling**, featuring **9 million songs** along with associated metadata, lyric embeddings, and song IDs. These IDs backlink to the original Genius pages, where the data was sourced.
## 🔹 Dataset Structure
**Sleeping-DISCO** is split into two components:
### 1. Sleeping-DISCO-Public
- Metadata for 8.89M songs
- Lyric embeddings
- YouTube video links for each song
- YouTube video metadata
### 2. Sleeping-DISCO-Private *(restricted)*
- Full lyrics
- Genius annotations
> ⚠️ Lyrics and annotations are **not included** in the public release. Access is available **only** to verified academic or research institutions for a limited period, upon request.
To request access, please email: **[sleeping4cat@gmail.com](mailto:sleeping4cat@gmail.com)**
## 📄 Paper
Read the first-version research paper on arXiv:
👉 [https://arxiv.org/abs/2506.14293](https://arxiv.org/abs/2506.14293)
A full arXiv + conference version will be released in **2026**.
## ⚖️ License
This dataset is released under the **Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)** license.
More details: [https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en)
- ✅ **Attribution required**
- 🚫 **Non-commercial use only**
- 🚫 **No derivatives or redistribution allowed** unless by the original authors.
> For academic access to the private subset, contact: **sleeping4cat@gmail.com**
|
hazyresearch/MMLU-Pro_with_Llama_3.1_8B_Instruct_v1 | hazyresearch | 2025-06-24T04:27:17Z | 22 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.18203",
"region:us"
] | [] | 2025-05-29T01:54:10Z | null | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: answer_index
dtype: int64
- name: category
dtype: string
- name: src
dtype: string
- name: instruction
dtype: string
- name: subject
dtype: string
- name: samples
sequence: string
- name: extracted_answers
sequence: string
- name: answer_correct
sequence: bool
- name: GRMGemma_scores
sequence: float64
- name: GRM_scores
sequence: float64
- name: Skyworks_scores
sequence: float64
- name: URM_scores
sequence: float64
- name: GPM_scores
sequence: float64
- name: GRMLlama32_scores
sequence: float64
- name: OffsetBias_scores
sequence: float64
- name: ArmorRM_scores
sequence: float64
- name: QwenPRM_min_scores
sequence: float64
- name: QwenPRM_max_scores
sequence: float64
- name: QwenPRM_avg_scores
sequence: float64
- name: EurusPRMStage1_min_scores
sequence: float64
- name: EurusPRMStage1_max_scores
sequence: float64
- name: EurusPRMStage1_avg_scores
sequence: float64
- name: EurusPRMStage2_min_scores
sequence: float64
- name: EurusPRMStage2_max_scores
sequence: float64
- name: EurusPRMStage2_avg_scores
sequence: float64
- name: QRM_scores
sequence: float64
- name: InternLM2Reward7B_scores
sequence: float64
- name: weaver_scores
sequence: float64
splits:
- name: data
num_bytes: 321458078
num_examples: 500
download_size: 58438362
dataset_size: 321458078
configs:
- config_name: default
data_files:
- split: data
path: data/data-*
license: mit
---
# MMLU-Pro with Llama-3.1-8B-Instruct
This dataset contains 500 multiple-choice questions from the MMLU-Pro benchmark with 100 candidate responses generated by Llama-3.1-8B-Instruct for each problem. Each response has been evaluated for correctness using a mixture of GPT-4o-mini and procedural Python code to robustly parse different answer formats, and scored by multiple reward models (scalar values) and LM judges (boolean verdicts).
## Dataset Structure
- **Split**: Single split named `"data"`
- **Num rows**: 500 MMLU-Pro questions
- **Generations per query**: 100
### Key Fields
| Field | Type | Description |
|-------|------|-------------|
| `instruction` | `str` | Prompt given to Llama 3.1 8B Instruct |
| `samples` | `List[str]` | Model-generated answers (100 per problem) |
| `extracted_answers` | `List[str]` | Final answers extracted from completions (A, B, C, D, etc.) |
| `answer_correct` | `List[bool]` | Whether each extracted answer matches the correct choice |
| `*_verdicts` | `Dict[str, List[float]]` | Binary signals from verifier models (e.g., LM judges) |
| `*_scores` | `Dict[str, List[float]]` | Scalar scores from reward models |
## Example Entry
```json
{
"instruction": "The following is a multiple choice question. Answer with the letter of the correct choice.\n\nQuestion: Which of the following best describes quantum entanglement?\nA. Particles moving at light speed\nB. Correlated quantum states\nC. Energy conservation\nD. Wave-particle duality\n\nAnswer:",
"samples": ["Quantum entanglement refers to...", "The answer is B", "I think the answer is B", ...],
"extracted_answers": ["B", "B", "A", ...],
"answer_correct": [true, true, false, ...],
"Llama-3.3-70B-Instruct_verdicts": [1.0, 1.0, 0.0, ...],
"GRMGemma_scores": [0.85, 0.79, 0.42, ...],
...
}
```
## Quick Start
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("hazyresearch/MMLU-Pro_with_Llama_3.1_8B_Instruct_v1")["data"]
# Get the first problem
problem = dataset[0]
print(f"Problem: {problem['instruction']}")
# Select the best response using pre-computed Weaver scores
best_idx = max(range(len(problem['weaver_scores'])), key=lambda i: problem['weaver_scores'][i])
best_response = problem['samples'][best_idx]
print(f"\nBest response (Weaver): {best_response}")
# Check if it's actually correct
print(f"Is correct: {problem['answer_correct'][best_idx]}")
```
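Alongside the pre-computed reward-model scores, a simple baseline is majority voting over `extracted_answers` — a minimal sketch on toy values:

```python
from collections import Counter

# Toy stand-ins for problem["extracted_answers"] and problem["answer_correct"].
extracted_answers = ["B", "B", "A", "B", "C"]
answer_correct = [True, True, False, True, False]

# Most frequent extracted answer wins the vote.
majority, count = Counter(extracted_answers).most_common(1)[0]
print(majority, count)  # B 3

# The vote is "correct" if any sample giving the majority answer is marked correct.
is_correct = any(c for a, c in zip(extracted_answers, answer_correct) if a == majority)
print(is_correct)  # True
```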
## Source
Original MMLU-Pro problems from [TIGER-Lab/MMLU-Pro](https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro).
## Usage with Weaver
This dataset can be used with the [Weaver framework](https://github.com/HazyResearch/scaling-verification) for training and evaluating verifier aggregation methods. See the repository for detailed instructions on reproducing paper results.
## Citation
```bibtex
@misc{saadfalcon2025shrinkinggenerationverificationgapweak,
title={Shrinking the Generation-Verification Gap with Weak Verifiers},
author={Jon Saad-Falcon and E. Kelly Buchanan and Mayee F. Chen and Tzu-Heng Huang and Brendan McLaughlin and Tanvir Bhathal and Shang Zhu and Ben Athiwaratkun and Frederic Sala and Scott Linderman and Azalia Mirhoseini and Christopher Ré},
year={2025},
eprint={2506.18203},
archivePrefix={arXiv},
primaryClass={cs.CR},
url={https://arxiv.org/abs/2506.18203},
}
``` |
FlagEval/EmbodiedVerse-Bench | FlagEval | 2025-06-24T02:59:04Z | 224 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-18T07:53:50Z | null | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: raw_question_id
dtype: string
- name: level-1
dtype: string
- name: level-2
dtype: string
- name: question
dtype: string
- name: question_type
dtype: string
- name: img_path
sequence: image
- name: video_path
sequence: video
- name: mask_path
sequence: image
- name: answer
dtype: string
- name: options
sequence: string
- name: source
dtype: string
splits:
- name: open
num_bytes: 4895265213.102
num_examples: 2042
download_size: 1562859079
dataset_size: 4895265213.102
configs:
- config_name: default
data_files:
- split: open
path: data/open-*
---
EmbodiedVerse-Bench is a meta-dataset composed of 10 datasets for comprehensively evaluating models in embodied intelligence scenarios, including:
1. Where2Place: A collection of 100 real-world images from diverse cluttered environments, each annotated with a sentence describing a desired free space and a corresponding mask, designed to evaluate free-space referencing using spatial relations.
2. Blink: A benchmark of visual problems that humans can solve easily; EmbodiedVerse samples the categories related to spatial understanding (Counting, Relative_Depth, Spatial_Relation, Multi-view_Reasoning, Visual_Correspondence).
3. CVBench: A vision-centric benchmark containing 2,638 manually inspected examples.
4. RoboSpatial-Home: A new spatial reasoning benchmark designed to evaluate vision-language models (VLMs) in real-world indoor environments for robotics.
5. EmbspatialBench: A benchmark for evaluating the embodied spatial understanding of LVLMs, automatically derived from embodied scenes and covering 6 spatial relationships from an egocentric perspective.
6. All-Angles Bench: A benchmark for multi-view understanding, including over 2,100 human-annotated multi-view QA pairs across 90 real-world scenes.
7. VSI-Bench: A video-based benchmark that constructs questions from egocentric videos of real indoor scenes, aiming to evaluate the visual-spatial intelligence of multimodal large models. EmbodiedVerse uses a tiny subset containing 400 questions.
8. SAT: A challenging real-image dynamic spatial benchmark.
9. EgoPlan-Bench2: A benchmark encompassing everyday tasks spanning 4 major domains and 24 detailed scenarios, closely aligned with human daily life.
10. ERQA: An evaluation benchmark covering a variety of topics related to spatial reasoning and world knowledge, focused on real-world scenarios, particularly in the context of robotics.
Please note: The images for the EgoPlan-Bench2 and All-Angles-Bench datasets are extracted from Ego4D videos. Due to licensing requirements, they are not provided directly here. You must obtain a license and download them yourself from the official Ego4D website: https://ego4d-data.org/#download
|
CatBarks/merged_prompt_injection_dataset | CatBarks | 2025-06-24T02:44:31Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T02:43:47Z | null | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Label
dtype: int64
- name: Source
dtype: string
- name: Index
dtype: int64
splits:
- name: train
num_bytes: 1094382261
num_examples: 1467947
download_size: 328216517
dataset_size: 1094382261
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
whynlp/gsm8k-aug | whynlp | 2025-06-24T01:53:18Z | 4 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2405.14838",
"arxiv:2506.18582",
"region:us"
] | [] | 2025-06-22T02:20:52Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: steps
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 92353643
num_examples: 385620
- name: validation
num_bytes: 150156
num_examples: 500
- name: test
num_bytes: 406195
num_examples: 1319
download_size: 50318247
dataset_size: 92909994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# GSM8K-AUG
This dataset is an augmented version of the [GSM8K](https://huggingface.co/datasets/openai/gsm8k) dataset. It extends the original GSM8K training set to 385k samples by prompting GPT-4. The dataset was originally proposed in the paper "[From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step](https://arxiv.org/pdf/2405.14838)".
## Usage
Load the dataset using the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("whyNLP/gsm8k-aug")
print(dataset["train"][0])
# {'question': 'Out of 600 employees in a company, 30% got promoted while 10% received bonus. How many employees did not get either a promotion or a bonus?', 'steps': ['<<600*30/100=180>>', '<<600*10/100=60>>', '<<180+60=240>>', '<<600-240=360>>'], 'answer': '360'}
```
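Each entry in `steps` is a calculator-style expression of the form `<<expr=result>>`. A minimal sketch for parsing and checking one step (the format assumption is based on the example row above):

```python
import re

step = "<<600*30/100=180>>"  # taken from the example row above

# Split the calculator annotation into expression and claimed result.
match = re.fullmatch(r"<<(.+)=(.+)>>", step)
expr, result = match.group(1), match.group(2)

# eval is acceptable here because the expressions are plain arithmetic.
assert eval(expr) == float(result)
print(expr, "->", result)  # 600*30/100 -> 180
```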
## The Augmentation Collection
There are two versions of the augmented dataset:
1. **GSM8K-AUG**: The augmented dataset with the steps as mathematical expressions only.
2. [GSM8K-AUG-NL](https://huggingface.co/datasets/whynlp/gsm8k-aug-nl): The augmented dataset with the steps as natural language sentences.
## Disclaimer
This dataset is identical to the one released by [CODI](https://huggingface.co/datasets/zen-E/GSM8k-Aug), but uses a different format to facilitate its use in [our paper](https://arxiv.org/abs/2506.18582). When we started our project, there was no available source for this dataset on the Hugging Face Hub.
|
stalaei/realmath_2025-2025-06 | stalaei | 2025-06-24T01:47:30Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T01:47:25Z | null | ---
dataset_info:
features:
- name: paper_link
dtype: string
- name: theorem
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: submission_date
dtype: string
splits:
- name: train
num_bytes: 229366830
num_examples: 485
download_size: 147283564
dataset_size: 229366830
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
theprint/MixedConversations-s5 | theprint | 2025-06-24T01:32:11Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T01:30:41Z | null | ---
license: apache-2.0
---
|
forecastingresearch/forecastbench-datasets | forecastingresearch | 2025-06-24T01:20:38Z | 523 | 1 | [
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2409.19839",
"region:us"
] | [] | 2025-03-03T14:37:26Z | null | ---
license: cc-by-sa-4.0
language:
- en
pretty_name: ForecastBench
---
[](https://iclr.cc/virtual/2025/poster/28507) [](https://arxiv.org/abs/2409.19839)
## ForecastBench Datasets
This repository contains the datasets produced by ForecastBench, a forecasting benchmark for
LLMs.
More info at [https://www.forecastbench.org](https://www.forecastbench.org/).
Code available at [https://github.com/forecastingresearch/forecastbench](https://github.com/forecastingresearch/forecastbench).
## License
The datasets in this repository are distributed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
## Citation
```bibtex
@inproceedings{karger2025forecastbench,
title={ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities},
author={Ezra Karger and Houtan Bastani and Chen Yueh-Han and Zachary Jacobs and Danny Halawi and Fred Zhang and Philip E. Tetlock},
year={2025},
booktitle={International Conference on Learning Representations (ICLR)},
url={https://iclr.cc/virtual/2025/poster/28507}
}
```
|
temp-enpaiva/temp-280525-oasst2_es | temp-enpaiva | 2025-06-24T01:02:41Z | 228 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-29T02:03:40Z | null | ---
license: apache-2.0
configs:
- config_name: castellano
data_files:
- split: train
path: castellano/train-*
- config_name: jopara
data_files:
- split: train
path: jopara/train-*
dataset_info:
- config_name: castellano
features:
- name: conversation_id
dtype: int64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 11485286
num_examples: 3151
download_size: 4465385
dataset_size: 11485286
- config_name: jopara
features:
- name: conversation_id
dtype: int64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 9092288
num_examples: 2430
download_size: 3610215
dataset_size: 9092288
---
|
AntResearchNLP/ViLaSR-data | AntResearchNLP | 2025-06-24T00:54:08Z | 87 | 0 | [
"language:en",
"arxiv:2506.09965",
"region:us"
] | [] | 2025-06-14T09:05:02Z | null | ---
datasets:
- AntResearchNLP/ViLaSR-data
language:
- en
---
This repository provides the **ViLaSR-data** introduced in the paper:
**[Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing](https://arxiv.org/abs/2506.09965)**.
## Dataset Overview
The dataset consists of three main components:
- **VILASR-ColdStart-33k**: Initial data generated from cold-start prompts (`cold_start_path*.zip`)
- **VILASR-RRS-8k**: Data refined using Reflective Rejection Sampling (`reflective_rejection_sampling_part*.zip`)
- **VILASR-RL-40k**: Reinforcement Learning enhanced data (`rl_part*.zip`)
For more details on data processing and usage, please refer to the accompanying code repository: https://github.com/AntResearchNLP/ViLaSR
To extract the zip files, run:
```bash
python unzip.py
```
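A minimal, hypothetical equivalent of the repository's `unzip.py` could look like the sketch below (the actual script in the ViLaSR repo is authoritative; the `*.zip` glob is an assumption matching the archive names listed above):

```python
# Hypothetical sketch of unzip.py: extract every *.zip part in a source
# directory (e.g. cold_start_path*.zip, rl_part*.zip) into one destination.
import glob
import os
import zipfile

def extract_all(src_dir: str, dest_dir: str) -> list[str]:
    """Extract every zip archive found in src_dir into dest_dir.

    Returns the list of member names extracted, in archive order.
    """
    os.makedirs(dest_dir, exist_ok=True)
    extracted = []
    for path in sorted(glob.glob(os.path.join(src_dir, "*.zip"))):
        with zipfile.ZipFile(path) as zf:
            zf.extractall(dest_dir)
            extracted.extend(zf.namelist())
    return extracted
```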
```bibtex
@misc{wu2025reinforcingspatialreasoningvisionlanguage,
title={Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing},
author={Junfei Wu and Jian Guan and Kaituo Feng and Qiang Liu and Shu Wu and Liang Wang and Wei Wu and Tieniu Tan},
year={2025},
eprint={2506.09965},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.09965},
}
```
# License
- The **SR_91k** portion of this dataset is derived from the [RUBBISHLIKE/SpaceR-151k](https://huggingface.co/datasets/RUBBISHLIKE/SpaceR-151k) under the [CC BY-NC 4.0 License](https://creativecommons.org/licenses/by-nc/4.0/). |
asingh0/StudyChatA6 | asingh0 | 2025-06-24T00:34:08Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T17:40:54Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: response
dtype: string
- name: timestamp
dtype: timestamp[us]
- name: chatId
dtype: string
- name: userId
dtype: string
- name: interactionCount
dtype: int64
- name: llm_label
struct:
- name: justification
dtype: string
- name: label
dtype: string
- name: topic
dtype: string
- name: prompt
dtype: string
- name: label
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 26766962
num_examples: 889
download_size: 6670399
dataset_size: 26766962
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jacobmorrison/wildchat_perturbed_3000_replaced_no_keyword | jacobmorrison | 2025-06-24T00:15:21Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T00:14:18Z | null | ---
dataset_info:
features:
- name: conversation_hash
dtype: string
- name: model
dtype: string
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: conversation
list:
- name: content
dtype: string
- name: country
dtype: string
- name: hashed_ip
dtype: string
- name: header
struct:
- name: accept-language
dtype: string
- name: user-agent
dtype: string
- name: language
dtype: string
- name: redacted
dtype: bool
- name: role
dtype: string
- name: state
dtype: string
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: toxic
dtype: bool
- name: turn_identifier
dtype: int64
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: harassment_threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: hate_threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: self_harm
dtype: bool
- name: self_harm_instructions
dtype: bool
- name: self_harm_intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: sexual_minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: violence_graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: harassment_threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: hate_threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: self_harm
dtype: float64
- name: self_harm_instructions
dtype: float64
- name: self_harm_intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: sexual_minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: violence_graphic
dtype: float64
- name: flagged
dtype: bool
- name: detoxify_moderation
list:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: toxic
dtype: bool
- name: redacted
dtype: bool
- name: state
dtype: string
- name: country
dtype: string
- name: hashed_ip
dtype: string
- name: header
struct:
- name: accept-language
dtype: string
- name: user-agent
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: unique_id
dtype: int64
splits:
- name: train
num_bytes: 2381492020
num_examples: 100000
download_size: 1260014903
dataset_size: 2381492020
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ToxicityPrompts/PolyGuardMix | ToxicityPrompts | 2025-06-23T21:39:12Z | 229 | 1 | [
"task_categories:text2text-generation",
"language:ar",
"language:zh",
"language:cs",
"language:nl",
"language:en",
"language:fr",
"language:de",
"language:hi",
"language:th",
"language:it",
"language:ja",
"language:ko",
"language:pl",
"language:pt",
"language:ru",
"language:es",
"language:sv",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.04377",
"region:us",
"safety",
"multilingual"
] | [
"text2text-generation"
] | 2025-02-18T06:58:07Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: prompt_harm_label
dtype: string
- name: response_refusal_label
dtype: string
- name: response_harm_label
dtype: string
- name: prompt_safety_categories
dtype: string
- name: response_safety_categories
dtype: string
- name: metadata
struct:
- name: language
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 3783037965
num_examples: 1910372
download_size: 2306303141
dataset_size: 3783037965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text2text-generation
language:
- ar
- zh
- cs
- nl
- en
- fr
- de
- hi
- th
- it
- ja
- ko
- pl
- pt
- ru
- es
- sv
tags:
- safety
- multilingual
size_categories:
- 1M<n<10M
license: cc-by-4.0
---
# PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages
Abstract: Truly multilingual safety moderation efforts for Large Language Models (LLMs) have been hindered by a narrow focus on a small set of languages (e.g., English, Chinese) as well as a limited scope of safety definition, resulting in significant gaps in moderation capabilities. To bridge these gaps, we release PolyGuard, a new state-of-the-art multilingual safety model for safeguarding LLM generations, and the corresponding training and evaluation datasets. PolyGuard is trained on PolyGuardMix, the largest multilingual safety training corpus to date containing 1.91M samples across 17 languages (e.g., Chinese, Czech, English, Hindi). We also introduce PolyGuardPrompts, a high quality multilingual benchmark with 29K samples for the evaluation of safety guardrails. Created by combining naturally occurring multilingual human-LLM interactions and human-verified machine translations of an English-only safety dataset (WildGuardMix; Han et al., 2024), our datasets contain prompt-output pairs with labels of prompt harmfulness, response harmfulness, and response refusal. Through extensive evaluations across multiple safety and toxicity benchmarks, we demonstrate that PolyGuard outperforms existing state-of-the-art open-weight and commercial safety classifiers by 5.5%. Our contributions advance efforts toward safer multilingual LLMs for all global users.
### Languages
The data supports 17 languages, which are listed in the table below.
| language code | language name |
|:----------------|:---------------------|
| ar | Arabic |
| cs | Czech |
| de | German |
| en | English |
| es              | Spanish              |
| fr              | French               |
| hi              | Hindi                |
| it | Italian |
| ja | Japanese |
| ko | Korean |
| nl | Dutch |
| pl | Polish |
| pt | Portuguese |
| ru | Russian |
| sv | Swedish |
| zh | Chinese |
| th | Thai |
### Data Fields
- `prompt`: prompt input by the user
- `response`: model's response to the user prompt
- `prompt_harm_label`: whether the prompt is harmful
- `response_refusal_label`: whether the model refuses the user's request
- `response_harm_label`: whether the response is harmful
- `prompt_safety_categories`: list of safety categories violated by a harmful prompt
- `response_safety_categories`: list of safety categories violated by a harmful response
- `metadata`: language and source of the data sample
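A small sketch of working with these fields — slicing the corpus by language and tallying prompt harm labels. In practice the rows would come from `datasets.load_dataset("ToxicityPrompts/PolyGuardMix", split="train")`; the logic is shown as a pure function over any iterable of rows, and the specific label strings (`"harmful"`/`"unharmful"`) are assumptions, since the card does not list the label vocabulary:

```python
# Tally prompt_harm_label values for one language slice of the corpus.
# Rows are dicts with the fields described above; label strings are assumed.
from collections import Counter
from typing import Iterable

def harm_counts_by_language(rows: Iterable[dict], language: str) -> Counter:
    """Count prompt_harm_label values for rows in the given language."""
    return Counter(
        row["prompt_harm_label"]
        for row in rows
        if row["metadata"]["language"] == language
    )
```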
### Citation
```
@misc{kumar2025polyguardmultilingualsafetymoderation,
title={PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages},
author={Priyanshu Kumar and Devansh Jain and Akhila Yerukola and Liwei Jiang and Himanshu Beniwal and Thomas Hartvigsen and Maarten Sap},
year={2025},
eprint={2504.04377},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.04377},
}
``` |
lucadang/qwen2.7-72B-Sudoku | lucadang | 2025-06-23T21:29:12Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T09:35:06Z | null | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: answer
dtype: string
- name: reward
dtype: float64
- name: task
dtype: string
- name: format_reward_func
dtype: float64
- name: check_answer_reward_func
dtype: float64
splits:
- name: train
num_bytes: 1495096
num_examples: 2000
download_size: 210166
dataset_size: 1495096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
svjack/Anime_Landscape_Wan_FusionX | svjack | 2025-06-23T20:26:14Z | 0 | 0 | [
"size_categories:n<1K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-06-23T20:25:57Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
- "metadata.csv"
---
|
# Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes the publicly available dataset cards on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.
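For the Hub API route mentioned above, a real pipeline would likely use `huggingface_hub`'s `DatasetCard` helper or a YAML parser; as an illustration of the two parts each card in this dataset consists of, here is a minimal stdlib-only sketch that splits a README into its YAML frontmatter and Markdown body by locating the `---` delimiters:

```python
# Split a dataset-card README into (frontmatter, body).
# Frontmatter is the YAML between the leading "---" fences; body is the rest.
def split_card(text: str) -> tuple[str, str]:
    """Return (frontmatter, body); frontmatter is "" if none is present."""
    if text.startswith("---\n"):
        end = text.find("\n---", 4)
        if end != -1:
            return text[4:end], text[end + 4 :].lstrip("\n")
    return "", text
```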
### Source Data
The source data is the `README.md` files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over their content. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information they contain. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in the data they describe. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact