Dataset Viewer (preview)
Sample rows for entity Q31 (Belgium); the viewer preview lists its labels across roughly 100 languages, with `null` where no description exists for that language:
| id | lang | label | description |
|---|---|---|---|
| Q31 | el | Βέλγιο | χώρα της δυτικής Ευρώπης |
| Q31 | nl | België | federale staat in West-Europa |
| Q31 | es | Bélgica | país de Europa occidental |
| Q31 | uk | Бельгія | країна в Західній Європі |
| Q31 | ar | بلجيكا | دولة في أوروبا الغربية |
| Q31 | zh | 比利時 | 西歐國家 |
| Q31 | en-gb | Belgium | country in Europe |
| Q31 | ay | Bilkiya | null |
| Q31 | lt | Belgija | null |
| … | … | … | … |
Wikidata Multilingual Label Maps 2025
Comprehensive multilingual label and description maps extracted from the 2025 Wikidata dump.
This dataset contains labels and descriptions for Wikidata entities (Q-items and P-properties) across 613 languages.
Dataset Overview
- 📊 Total Records: 725,274,530 label/description pairs
- 🆔 Unique Entities: 117,229,348 (Q-items and P-properties)
- 🌍 Languages: 613 unique language codes
- 📝 With Descriptions: 339,691,043 pairs (46.8% coverage)
- 📦 File Size: ~9.97 GB compressed
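The coverage figure is consistent with the totals above:

```python
# Sanity-check the headline numbers: description coverage is the number of
# pairs with descriptions divided by the total number of pairs.
total_pairs = 725_274_530
with_descriptions = 339_691_043
coverage_pct = 100 * with_descriptions / total_pairs
print(f"{coverage_pct:.1f}%")  # 46.8%
```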
Files
- `qid_labels_desc.parquet` — columns: `id`, `len` (language code), `label`, `des` (description)
  - Q-items: 724,987,441 records (99.96%)
  - P-properties: 287,089 records (0.04%)
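A minimal sketch of what rows with this schema look like, using a tiny in-memory sample (only the column names come from the file; the Q31 rows are taken from the preview and the P31 row is illustrative):

```python
# Hypothetical sample rows mirroring the qid_labels_desc.parquet schema:
# id (entity), len (language code), label, des (description, null when missing)
rows = [
    {"id": "Q31", "len": "en-gb", "label": "Belgium", "des": "country in Europe"},
    {"id": "Q31", "len": "ay", "label": "Bilkiya", "des": None},  # missing description
    {"id": "P31", "len": "en", "label": "instance of", "des": None},  # illustrative P-property
]

# The Q-item / P-property split follows from the id prefix
q_items = [r for r in rows if r["id"].startswith("Q")]
p_props = [r for r in rows if r["id"].startswith("P")]
print(len(q_items), len(p_props))  # 2 1
```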
Language Distribution
Top 10 Languages by Label Count
| Language | Count | Percentage |
|---|---|---|
| en (English) | 90,445,991 | 12.5% |
| nl (Dutch) | 56,171,796 | 7.7% |
| ast (Asturian) | 19,027,483 | 2.6% |
| fr (French) | 17,971,824 | 2.5% |
| de (German) | 17,094,083 | 2.4% |
| es (Spanish) | 16,174,147 | 2.2% |
| mul (Multiple) | 15,585,604 | 2.1% |
| it (Italian) | 11,831,736 | 1.6% |
| sq (Albanian) | 11,331,135 | 1.6% |
| ga (Irish) | 10,648,122 | 1.5% |
Note: The dataset includes 613 total languages, from major world languages to regional dialects and constructed languages.
Download Options
A) Hugging Face snapshot to a local folder
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="yashkumaratri/wikidata-label-maps-2025-all-languages",
    repo_type="dataset",  # required for dataset repos; the default is "model"
)
print(local_dir)  # contains qid_labels_desc.parquet
```
B) Git LFS
```shell
git lfs install
git clone https://huggingface.co/datasets/yashkumaratri/wikidata-label-maps-2025-all-languages
```
Citation
If you find this dataset useful in your research or applications, please consider citing it:
```bibtex
@misc{atri2025wikidatamultilingual,
  title        = {Wikidata Multilingual Label Maps (2025 snapshot)},
  author       = {Yash Kumar Atri},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/yashkumaratri/wikidata-label-maps-2025-all-languages}},
  note         = {Multilingual Dataset on Hugging Face}
}
```
This dataset is part of ongoing work on large-scale dynamic knowledge resources; a broader benchmark and paper will be released later.
Usage Examples
Hugging Face datasets
```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("yashkumaratri/wikidata-label-maps-2025-all-languages", split="train")
print(f"Dataset size: {ds.num_rows:,}")
print("Sample records:")
print(ds.select(range(5)).to_pandas())

# Filter for a specific language
english_ds = ds.filter(lambda x: x["len"] == "en")
print(f"English records: {english_ds.num_rows:,}")

# Find the entity with the most languages
entity_counts = Counter(ds["id"])
most_multilingual = entity_counts.most_common(1)[0]
print(f"Most multilingual entity: {most_multilingual[0]} with {most_multilingual[1]} languages")
```
Polars
Basic usage:
```python
import polars as pl

# Load the full dataset
df = pl.read_parquet("qid_labels_desc.parquet")
print(f"Total records: {df.height:,}")
print(f"Columns: {df.columns}")
print(df.head())

# Get all labels for a specific entity
entity_labels = df.filter(pl.col("id") == "Q31")  # Belgium
print(entity_labels.head(10))
```
Language-specific queries:
```python
import polars as pl

df = pl.scan_parquet("qid_labels_desc.parquet")

# Get English labels only
english_labels = df.filter(pl.col("len") == "en").collect()
print(f"English labels: {english_labels.height:,}")

# Get labels for multiple languages
multilang = df.filter(pl.col("len").is_in(["en", "fr", "de", "es"])).collect()
print(f"Major European languages: {multilang.height:,}")

# Language statistics
lang_stats = (
    df.group_by("len")
    .agg(pl.len().alias("count"))  # pl.len() replaces the deprecated pl.count()
    .sort("count", descending=True)
    .collect()
)
print("Top 10 languages:", lang_stats.head(10))
```
Entity resolution with preferred language fallback:
```python
import polars as pl

def get_entity_label(entity_id: str, preferred_langs: list = ["en", "fr", "de"]) -> str:
    """Get an entity label with language-preference fallback."""
    df = pl.scan_parquet("qid_labels_desc.parquet")
    entity_labels = (
        df.filter(pl.col("id") == entity_id)
        .filter(pl.col("label").is_not_null())
        .collect()
    )
    # Try preferred languages in order
    for lang in preferred_langs:
        label = entity_labels.filter(pl.col("len") == lang)
        if label.height > 0:
            return label.item(0, "label")
    # Fall back to any available label (item(0, ...) avoids the
    # "more than one item" error .item() raises on multi-row frames)
    if entity_labels.height > 0:
        return entity_labels.item(0, "label")
    return entity_id  # Return the ID if no label was found

# Example usage
print(get_entity_label("Q31", ["es", "en"]))  # Try Spanish first, then English
```
pandas
```python
import pandas as pd

# Load only the needed columns for memory efficiency
df = pd.read_parquet("qid_labels_desc.parquet", columns=["id", "len", "label"])

# Basic statistics
print(f"Total records: {len(df):,}")
print(f"Unique entities: {df['id'].nunique():,}")
print(f"Unique languages: {df['len'].nunique():,}")

# Language distribution
lang_counts = df['len'].value_counts()
print("Top 10 languages:")
print(lang_counts.head(10))

# Get multilingual labels for specific entities
entities_of_interest = ["Q31", "Q142", "Q183"]  # Belgium, France, Germany
multilingual = df[df['id'].isin(entities_of_interest)]
pivot_table = multilingual.pivot_table(
    index='id',
    columns='len',
    values='label',
    aggfunc='first'
)
print(pivot_table[['en', 'fr', 'de', 'es']].head())
```
DuckDB SQL
```sql
-- In the DuckDB shell, or via Python duckdb.execute
INSTALL httpfs; LOAD httpfs;
PRAGMA threads=16;

-- Create a view over the Parquet file
CREATE VIEW multilingual_labels AS
SELECT * FROM parquet_scan('qid_labels_desc.parquet');

-- Language statistics
SELECT len AS language, COUNT(*) AS label_count,
       ROUND(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (), 2) AS percentage
FROM multilingual_labels
GROUP BY len
ORDER BY label_count DESC
LIMIT 20;

-- Find entities with the widest language coverage
SELECT id, COUNT(DISTINCT len) AS num_languages,
       STRING_AGG(DISTINCT len, ', ' ORDER BY len) AS languages
FROM multilingual_labels
WHERE label IS NOT NULL
GROUP BY id
ORDER BY num_languages DESC
LIMIT 10;

-- Pivot specific entities across major languages
SELECT id,
       MAX(CASE WHEN len = 'en' THEN label END) AS english,
       MAX(CASE WHEN len = 'fr' THEN label END) AS french,
       MAX(CASE WHEN len = 'de' THEN label END) AS german,
       MAX(CASE WHEN len = 'es' THEN label END) AS spanish,
       MAX(CASE WHEN len = 'zh' THEN label END) AS chinese
FROM multilingual_labels
WHERE id IN ('Q31', 'Q142', 'Q183', 'Q148')
GROUP BY id;
```
PySpark
```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, desc

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("qid_labels_desc.parquet")

# Basic statistics
print(f"Total records: {df.count():,}")
print(f"Languages: {df.select('len').distinct().count():,}")

# Language distribution
lang_dist = (
    df.groupBy("len")
    .agg(count("*").alias("count"))
    .orderBy(desc("count"))
)
lang_dist.show(20)

# Entity multilingual coverage
entity_coverage = (
    df.filter(col("label").isNotNull())
    .groupBy("id")
    .agg(count("len").alias("num_languages"))
    .orderBy(desc("num_languages"))
)
entity_coverage.show(10)
```
Fast Lookup Helpers
Multilingual dictionary maps
```python
import polars as pl
from collections import defaultdict

# Load only the columns needed for lookups.
# Note: holding all ~725M rows in Python dicts needs a great deal of RAM;
# for the full file, filter down to the languages you need first.
df = pl.read_parquet("qid_labels_desc.parquet", columns=["id", "len", "label"])

# Nested dictionary: {language: {entity_id: label}}
MULTILANG_LABELS = defaultdict(dict)
for entity_id, lang, label in df.iter_rows():
    if label:  # Skip null labels
        MULTILANG_LABELS[lang][entity_id] = label

# Usage examples
print(MULTILANG_LABELS["en"].get("Q31", "Unknown"))  # English
print(MULTILANG_LABELS["fr"].get("Q31", "Unknown"))  # French
print(MULTILANG_LABELS["zh"].get("Q31", "Unknown"))  # Chinese

# List every language in which an entity has a label
entity_id = "Q31"
available_langs = [lang for lang in MULTILANG_LABELS if entity_id in MULTILANG_LABELS[lang]]
print(f"Q31 available in: {len(available_langs)} languages")
```
Multilingual resolver class
```python
import polars as pl
from typing import List, Optional, Dict

class MultilingualWDResolver:
    def __init__(self, parquet_file: str):
        self.df = pl.read_parquet(parquet_file)
        self._build_indices()

    def _build_indices(self):
        """Build language-specific lookup indices."""
        self.lang_maps = {}
        for lang in self.df.select("len").unique().to_series():
            lang_df = self.df.filter(pl.col("len") == lang)
            self.lang_maps[lang] = dict(zip(
                lang_df.select("id").to_series(),
                lang_df.select("label").to_series()
            ))

    def get_label(self, entity_id: str, lang: str = "en") -> Optional[str]:
        """Get the label for an entity in a specific language."""
        return self.lang_maps.get(lang, {}).get(entity_id)

    def get_multilingual_labels(self, entity_id: str, langs: List[str] = None) -> Dict[str, str]:
        """Get labels in multiple languages."""
        if langs is None:
            langs = ["en", "fr", "de", "es", "zh"]
        result = {}
        for lang in langs:
            label = self.get_label(entity_id, lang)
            if label:
                result[lang] = label
        return result

    def get_best_label(self, entity_id: str, preferred_langs: List[str] = ["en", "fr", "de"]) -> str:
        """Get the best available label with fallback."""
        for lang in preferred_langs:
            label = self.get_label(entity_id, lang)
            if label:
                return label
        # Try any available language
        for lang_map in self.lang_maps.values():
            if entity_id in lang_map:
                return lang_map[entity_id]
        return entity_id  # Fall back to the ID

# Usage
resolver = MultilingualWDResolver("qid_labels_desc.parquet")
print(resolver.get_label("Q31", "en"))                     # English
print(resolver.get_multilingual_labels("Q31"))             # Multiple languages
print(resolver.get_best_label("Q31", ["ja", "ko", "en"]))  # With preference
```
Language Coverage Analysis
Find entities with best multilingual coverage
```python
import polars as pl

def analyze_multilingual_coverage(parquet_file: str, top_n: int = 20):
    df = pl.read_parquet(parquet_file)
    # Entities with the most language coverage
    coverage = (
        df.filter(pl.col("label").is_not_null())
        .group_by("id")
        .agg([
            pl.col("len").n_unique().alias("num_languages"),
            pl.col("label").first().alias("sample_label")
        ])
        .sort("num_languages", descending=True)
        .limit(top_n)
    )
    print(f"Top {top_n} entities by language coverage:")
    for row in coverage.iter_rows(named=True):
        print(f"  {row['id']}: {row['num_languages']} languages | {row['sample_label']}")
    return coverage

analyze_multilingual_coverage("qid_labels_desc.parquet")
```
Language family analysis
```python
import polars as pl

# Simplified language groupings; note fa (Persian) and ur (Urdu) are
# Arabic-script languages, not varieties of Arabic
LANGUAGE_FAMILIES = {
    "Germanic": ["en", "de", "nl", "sv", "no", "da"],
    "Romance": ["fr", "es", "it", "pt", "ro", "ca"],
    "Slavic": ["ru", "pl", "cs", "sk", "uk", "bg"],
    "East Asian": ["zh", "ja", "ko"],
    "Arabic-script": ["ar", "fa", "ur"],
}

def analyze_by_language_family(parquet_file: str):
    df = pl.read_parquet(parquet_file)
    for family, languages in LANGUAGE_FAMILIES.items():
        family_count = (
            df.filter(pl.col("len").is_in(languages))
            .filter(pl.col("label").is_not_null())
            .height
        )
        print(f"{family}: {family_count:,} labels")

analyze_by_language_family("qid_labels_desc.parquet")
```
Use Cases
Cross-lingual Knowledge Graph Analysis
- Build multilingual knowledge graphs with proper labels
- Analyze entity coverage across different languages
- Create language-specific views of Wikidata
Machine Translation & NLP
- Training data for multilingual named entity recognition
- Cross-lingual entity linking datasets
- Evaluation of translation quality for entity names
Internationalization (i18n)
- Localize applications with proper entity names
- Build multilingual search interfaces
- Create region-specific content
Research Applications
- Study linguistic diversity in knowledge representation
- Analyze cultural bias in knowledge coverage
- Cross-cultural studies of entity naming patterns
Performance Tips
- Memory Management: Load only required columns (
columns=["id", "len", "label"]) - Language Filtering: Filter by language early to reduce data size
- Lazy Loading: Use
.scan_parquet()with Polars for large-scale processing - Indexing: Build language-specific dictionaries for frequent lookups
- Batch Processing: Process entities in batches for memory efficiency
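The batch-processing tip can be sketched with a simple helper (the ID list is hypothetical; on the real file you would iterate over Parquet row groups instead):

```python
def batched(items, batch_size):
    """Yield successive fixed-size batches from a sequence."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Hypothetical entity IDs; process each batch independently to bound memory use
entity_ids = [f"Q{i}" for i in range(10)]
sizes = [len(batch) for batch in batched(entity_ids, 4)]
print(sizes)  # [4, 4, 2]
```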
Data Quality Notes
- Some labels may be missing (NULL values) for certain language-entity combinations
- Description coverage is 46.8% across all language-entity pairs
- The dataset includes both major world languages and regional/minority languages
- Language codes follow ISO standards but may include some Wikidata-specific codes
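The missing-description note above can be checked mechanically; a sketch on a tiny hypothetical sample (the same ratio computed over the full file yields the 46.8% coverage figure quoted earlier):

```python
# (id, len, des) triples; des is None where the description is missing
sample = [
    ("Q31", "en-gb", "country in Europe"),
    ("Q31", "ay", None),
    ("Q31", "nl", "federale staat in West-Europa"),
    ("Q31", "lt", None),
]
with_desc = sum(1 for _, _, des in sample if des is not None)
coverage = 100 * with_desc / len(sample)
print(f"{coverage:.1f}%")  # 50.0%
```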