---
task_categories:
- summarization
- text2text-generation
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---
## Overview
This dataset contains Python code-docstring pairs in which the docstrings follow the Google style. A Google-style docstring is structured as follows:
```text
<Description of the code>

Args:
    <var1> (<data-type>): <description of var1>
    <var2> (<data-type>): <description of var2>

Returns:
    <var3> (<data-type>): <description of var3>

Raises:
    <var4> (<data-type>): <description of var4>
```
The exact format varies widely (some docstrings include additional sections such as Examples or Notes), but generally speaking, each docstring should contain an Args/Parameters section and a Returns section.
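For illustration, a function documented in this style might look like the following (a hypothetical example, not a sample from the dataset):

```python
def scale_values(values, factor):
    """Multiplies each value in a list by a constant factor.

    Args:
        values (list[float]): The numbers to scale.
        factor (float): The multiplier applied to each element.

    Returns:
        list[float]: A new list containing each input value times factor.

    Raises:
        TypeError: If values is not an iterable of numbers.
    """
    return [v * factor for v in values]
```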
## Source
The dataset was gathered from three different sources:
### CodeSearchNet
From their Python split of ~250k samples, ~23k samples were extracted, a retention rate of under 10%: most samples from CodeSearchNet contained informal docstrings with only a description and no structured sections.
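The card does not show the exact filter used for this extraction, but a retention check of this kind could be sketched as a simple regex test for the expected section headers (`has_structured_sections` is a hypothetical helper, not part of the dataset tooling):

```python
import re

def has_structured_sections(docstring: str) -> bool:
    """Returns True if the docstring contains the section headers
    expected of a Google-style docstring: Args (or Parameters)
    plus Returns, each at the start of a line."""
    has_args = re.search(r"^\s*(Args|Parameters):", docstring, re.MULTILINE)
    has_returns = re.search(r"^\s*Returns:", docstring, re.MULTILINE)
    return bool(has_args and has_returns)

# An informal docstring (description only) would be dropped.
informal = "Fetches rows from a table."

# A structured Google-style docstring would be retained.
formal = """Fetches rows from a table.

Args:
    table (str): Table name.

Returns:
    list: The matching rows.
"""
```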
### Repositories Under Google's GitHub Organization Page
You can find the organization page here. The scraped repositories (mostly under Google's organization, plus a few other large projects that follow the Google docstring style) are:
```python
repos = [
    "https://github.com/google/python-fire",
    "https://github.com/google/yapf",
    "https://github.com/google/pytype",
    "https://github.com/google/tf-quant-finance",
    "https://github.com/google/budoux",
    "https://github.com/google/mobly",
    "https://github.com/google/temporian",
    "https://github.com/google/pyglove",
    "https://github.com/google/subpar",
    "https://github.com/google/weather-tools",
    "https://github.com/google/ci_edit",
    "https://github.com/google/etils",
    "https://github.com/google/pcbdl",
    "https://github.com/google/starthinker",
    "https://github.com/google/pytruth",
    "https://github.com/google/nsscache",
    "https://github.com/google/megalista",
    "https://github.com/google/fhir-py",
    "https://github.com/google/chatbase-python",
    "https://github.com/tensorflow/tensorflow",
    "https://github.com/google/project-OCEAN",
    "https://github.com/google/qhbm-library",
    "https://github.com/google/data-quality-monitor",
    "https://github.com/google/genai-processors",
    "https://github.com/google/python-proto-converter",
    "https://github.com/google/sprockets",
    "https://github.com/keras-team/keras",
    "https://github.com/scikit-learn/scikit-learn",
    "https://github.com/apache/beam",
    "https://github.com/huggingface/transformers",
]
```
A total of ~11k samples were gathered from this source.
### Juraj's Python Google-style Docstrings Dataset
This dataset was created by the user Juraj-juraj. You can find the dataset here. A total of ~25k samples were gathered from this source after further preprocessing.
## Preprocessing Steps
The following cleaning, normalization, and preprocessing steps were performed:
- Removed duplicates based on both code and docstring
- Removed samples with empty code or docstrings
- Removed samples with extremely short entries (<20 chars)
- Removed samples with extremely long entries (>5000 chars)
- Removed comments and docstrings from the code
- Removed samples whose docstring isn't in English (using langdetect)
- Removed samples whose docstring contained special characters such as HTML tags or URLs
- Removed samples whose docstring is <12 or >256 tokens under the CodeT5+ tokenizer
- Normalized all docstring entries by removing indentation
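The deduplication and length filters among these steps can be sketched in plain Python; the langdetect, special-character, and CodeT5+ tokenizer checks are omitted here, and `clean_pairs` is a hypothetical helper name, not the actual pipeline code:

```python
def clean_pairs(pairs, min_chars=20, max_chars=5000):
    """Applies the duplicate, emptiness, and length filters described
    above to a list of (code, docstring) pairs."""
    seen = set()
    cleaned = []
    for code, docstring in pairs:
        code, docstring = code.strip(), docstring.strip()
        if not code or not docstring:
            continue  # drop empty entries
        if not (min_chars <= len(code) <= max_chars):
            continue  # drop extremely short or long code
        if not (min_chars <= len(docstring) <= max_chars):
            continue  # drop extremely short or long docstrings
        key = (code, docstring)
        if key in seen:
            continue  # drop duplicates on both code and docstring
        seen.add(key)
        cleaned.append((code, docstring))
    return cleaned
```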
## Data Structure
Each sample in the dataset has the following fields:

```text
<code>      : <The code, stripped of docstrings and comments>
<docstring> : <The corresponding docstring of the code>
<source>    : <The source from which the code came>
```
The source is one of the following:
- `CodeSearchNet` - from the CodeSearchNet dataset
- `github-repos` - from the repositories under Google's GitHub organization page
- `juraj-google-style` - from Juraj's Python Google-style docstring dataset
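Since each record is a flat mapping of these three fields, subsetting by source is straightforward. A minimal sketch, assuming the records have been loaded as Python dicts (the values shown are toy placeholders, not real samples):

```python
records = [
    {"code": "def a(): pass", "docstring": "Does A.", "source": "CodeSearchNet"},
    {"code": "def b(): pass", "docstring": "Does B.", "source": "github-repos"},
    {"code": "def c(): pass", "docstring": "Does C.", "source": "juraj-google-style"},
]

# Keep only the samples scraped from the GitHub repositories listed above.
github_only = [r for r in records if r["source"] == "github-repos"]
```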