---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 5184523
    num_examples: 76
  download_size: 1660815
  dataset_size: 5184523
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
language:
- en
- zh
- ja
- es
- el
tags:
- finance
- multilingual
pretty_name: PolyFiQA-Expert
size_categories:
- n<1K
task_categories:
- question-answering
---
Dataset Card for PolyFiQA-Expert
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://huggingface.co/collections/TheFinAI/multifinben-6826f6fc4bc13d8af4fab223
- Repository: https://huggingface.co/datasets/TheFinAI/polyfiqa-expert
- Paper: MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation
- Leaderboard: https://huggingface.co/spaces/TheFinAI/Open-FinLLM-Leaderboard
Dataset Summary
PolyFiQA-Expert is a multilingual financial question-answering dataset designed to evaluate expert-level financial reasoning in low-resource and multilingual settings. Each instance consists of a task identifier, a query prompt, an associated financial question, and the correct answer. The Expert split emphasizes complex, high-level financial understanding, requiring deeper domain knowledge and nuanced reasoning.
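A minimal loading sketch with the Hugging Face `datasets` library (field names follow the schema above; everything else is standard `load_dataset` usage):

```python
from datasets import load_dataset

# PolyFiQA-Expert ships a single "test" split with 76 examples.
ds = load_dataset("TheFinAI/polyfiqa-expert", split="test")

example = ds[0]
print(example["task_id"])          # unique identifier for the query-task pair
print(example["question"][:300])   # full financial question (truncated for display)
print(example["answer"][:300])     # ground-truth answer (truncated for display)
```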
Supported Tasks and Leaderboards
- Tasks:
  - question-answering
- Evaluation Metrics:
  - ROUGE-1 (see the scoring sketch below)
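Since scoring is reference-based, here is a minimal ROUGE-1 sketch using the `rouge_score` package; this is one plausible implementation, not necessarily the benchmark's exact evaluation harness, and the example strings are hypothetical:

```python
from rouge_score import rouge_scorer

# ROUGE-1 measures unigram overlap between a prediction and a reference.
# Stemming is disabled here because the dataset is multilingual; the default
# tokenizer may also need language-specific preprocessing for zh/ja text.
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=False)

def rouge1_f1(prediction: str, reference: str) -> float:
    # rouge_score's score() takes (target, prediction) in that order.
    return scorer.score(reference, prediction)["rouge1"].fmeasure

# Hypothetical prediction/reference pair for illustration:
print(rouge1_f1("net income rose 12% year over year",
                "net income increased 12% year over year"))
```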
Languages
- English (en)
- Chinese (zh)
- Japanese (ja)
- Spanish (es)
- Greek (el)
Dataset Structure
Data Instances
Each instance in the dataset contains:
- task_id: A unique identifier for the query-task pair.
- query: A brief query statement from the financial domain.
- question: The full question posed based on the query context.
- answer: The correct answer string.
Data Fields
| Field | Type | Description |
|---|---|---|
| task_id | string | Unique ID per task |
| query | string | Financial query (short form) |
| question | string | Full natural-language financial question |
| answer | string | Ground-truth answer to the question |
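As a quick sanity check, the declared schema can be verified programmatically; a minimal sketch assuming only the `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("TheFinAI/polyfiqa-expert", split="test")

# All four fields should be plain strings, matching the table above.
assert set(ds.features) == {"task_id", "query", "question", "answer"}
assert all(feature.dtype == "string" for feature in ds.features.values())
print(ds.features)
```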
Data Splits
| Split | # Examples | Size (bytes) |
|---|---|---|
| test | 76 | 5,184,523 |
Dataset Creation
Curation Rationale
PolyFiQA-Expert was curated to probe the financial reasoning capabilities of large language models in expert-level scenarios.
Source Data
Initial Data Collection
The source data was derived from a diverse collection of English financial reports. Questions were drawn from real-world financial scenarios and manually adapted to a concise QA format.
Source Producers
Data was created by researchers and annotators with backgrounds in finance, NLP, and data curation.
Annotations
Annotation Process
Questions and answers were carefully authored and validated through a multi-round expert annotation process to ensure fidelity and depth.
Annotators
A team of finance researchers and data scientists.
Personal and Sensitive Information
The dataset contains no personal or sensitive information. All content is synthetic or anonymized for safe usage.
Considerations for Using the Data
Social Impact of Dataset
PolyFiQA-Expert supports research in multilingual financial NLP and question answering, with applications in risk analysis, regulatory auditing, and financial advising tools.
Discussion of Biases
- May over-represent English financial contexts.
- Questions emphasize clarity and answerability over real-world ambiguity.
Other Known Limitations
- Limited size (76 examples).
- Focused on expert-level questions; results may not generalize to other task types or difficulty levels.
Additional Information
Dataset Curators
- The FinAI Team
Licensing Information
- License: Apache License 2.0
Citation Information
If you use this dataset, please cite:
@misc{peng2025multifinbenmultilingualmultimodaldifficultyaware,
title={MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation},
author={Xueqing Peng and Lingfei Qian and Yan Wang and Ruoyu Xiang and Yueru He and Yang Ren and Mingyang Jiang and Jeff Zhao and Huan He and Yi Han and Yun Feng and Yuechen Jiang and Yupeng Cao and Haohang Li and Yangyang Yu and Xiaoyu Wang and Penglei Gao and Shengyuan Lin and Keyi Wang and Shanshan Yang and Yilun Zhao and Zhiwei Liu and Peng Lu and Jerry Huang and Suyuchen Wang and Triantafillos Papadopoulos and Polydoros Giannouris and Efstathia Soufleri and Nuo Chen and Guojun Xiong and Zhiyang Deng and Yijia Zhao and Mingquan Lin and Meikang Qiu and Kaleb E Smith and Arman Cohan and Xiao-Yang Liu and Jimin Huang and Alejandro Lopez-Lira and Xi Chen and Junichi Tsujii and Jian-Yun Nie and Sophia Ananiadou and Qianqian Xie},
year={2025},
eprint={2506.14028},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.14028},
}