Update README.md
README.md (CHANGED)
@@ -31,4 +31,50 @@ configs:
    path: data/train-*
  - split: validation
    path: data/validation-*
task_categories:
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---

## Clean SQuAD Classic v2

This is a refined version of the [SQuAD v2](https://huggingface.co/datasets/rajpurkar/squad_v2) dataset. It has been preprocessed to ensure higher data quality and usability for NLP tasks such as Question Answering.

## Description

The **Clean SQuAD Classic v2** dataset was created by applying preprocessing steps to the original SQuAD v2 dataset, including the following (a reproduction sketch appears after the list):
- **Trimming whitespace**: All leading and trailing spaces have been removed from the `question` field.
- **Minimum question length**: Questions with fewer than 12 characters were filtered out to remove overly short or uninformative entries.
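
The exact preprocessing script is not published on this card; the sketch below is a minimal, assumed reproduction of the two steps using the `datasets` library and the public `rajpurkar/squad_v2` source:

```python
from datasets import load_dataset

# Assumed reproduction of the cleaning steps described above (illustrative only).
squad_v2 = load_dataset("rajpurkar/squad_v2")

def trim_question(example):
    # Trimming whitespace: strip leading and trailing spaces from the question.
    example["question"] = example["question"].strip()
    return example

cleaned = squad_v2.map(trim_question)

# Minimum question length: drop questions with fewer than 12 characters.
cleaned = cleaned.filter(lambda example: len(example["question"]) >= 12)
```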

Unlike the [Clean SQuAD v2](https://huggingface.co/datasets/decodingchris/clean_squad_v2) dataset, this dataset does not contain a separate test split. It retains the classic two-way split of **train** and **validation**, following the traditional structure of the original SQuAD v2 dataset.

## Dataset Structure

The dataset is divided into two subsets (a split-access sketch follows the list):

1. **Train**: The primary dataset for model training.
2. **Validation**: A dataset for hyperparameter tuning and model validation.
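
A minimal sketch of checking the two splits after loading (the basic loading call is shown in the Usage section below; exact row counts are not stated on this card):

```python
from datasets import load_dataset

# The DatasetDict is expected to expose exactly the two splits listed above.
dataset = load_dataset("decodingchris/clean_squad_classic_v2")
print(dataset)  # shows the "train" and "validation" splits with their row counts
print(dataset["train"].num_rows, dataset["validation"].num_rows)
```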

## Data Fields

Each subset contains the following fields (an inspection sketch follows the list):
- `id`: Unique identifier for each question-context pair.
- `title`: Title of the article the context is derived from.
- `context`: Paragraph from which the answer is extracted.
- `question`: Preprocessed question string.
- `answers`: Dictionary containing:
  - `text`: The text of the correct answer(s), if available. Empty for unanswerable questions.
  - `answer_start`: Character-level start position of the answer in the context, if available.
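
A minimal sketch of accessing these fields (field names are taken from the list above; the printed values are simply whatever the first training example contains):

```python
from datasets import load_dataset

# Illustrative field access on a single example.
dataset = load_dataset("decodingchris/clean_squad_classic_v2")
example = dataset["train"][0]

print(example["id"], "-", example["title"])
print("Question:", example["question"])
print("Context snippet:", example["context"][:100])
# Both lists are empty for unanswerable questions.
print("Answers:", example["answers"]["text"], example["answers"]["answer_start"])
```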

## Usage

The dataset is hosted on the Hugging Face Hub and can be loaded with the following code:

```python
from datasets import load_dataset

dataset = load_dataset("decodingchris/clean_squad_classic_v2")
```
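
If only one split is needed, it can be loaded directly via the `split` argument, for example:

```python
from datasets import load_dataset

# Load just the validation split as a single Dataset rather than a DatasetDict.
validation = load_dataset("decodingchris/clean_squad_classic_v2", split="validation")
```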