# INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information

## 🧩 Overview
INTERCHART is a multi-tier benchmark that evaluates how well vision-language models (VLMs) reason across multiple related charts, a crucial skill for real-world applications like scientific reports, financial analyses, and policy dashboards.
Unlike single-chart benchmarks, INTERCHART challenges models to integrate information across decomposed, synthetic, and real-world chart contexts.
Paper: [INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information](https://arxiv.org/abs/2508.07630)

## Dataset Structure
```
INTERCHART/
├── DECAF
│   ├── combined    # Multi-chart combined images (stitched)
│   ├── original    # Original compound charts
│   ├── questions   # QA pairs for decomposed single-variable charts
│   └── simple      # Simplified decomposed charts
├── SPECTRA
│   ├── combined    # Synthetic chart pairs (shared axes)
│   ├── questions   # QA pairs for correlated and independent reasoning
│   └── simple      # Individual charts rendered from synthetic tables
└── STORM
    ├── combined    # Real-world chart pairs (stitched)
    ├── images      # Original Our World in Data charts
    ├── meta-data   # Extracted metadata and semantic pairings
    ├── questions   # QA pairs for temporal, cross-domain reasoning
    └── tables      # Structured table representations (optional)
```
Each subset targets a different level of reasoning complexity and visual diversity.
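As a quick sanity check on this layout, a small helper (hypothetical, not part of the dataset release) can map a subset name to the folder holding its QA pairs; the root directory name `INTERCHART` is an assumption for illustration:

```python
from pathlib import Path

# Folder names per subset, taken from the directory tree above.
SUBSET_DIRS = {
    "DECAF": ("combined", "original", "questions", "simple"),
    "SPECTRA": ("combined", "questions", "simple"),
    "STORM": ("combined", "images", "meta-data", "questions", "tables"),
}

def questions_dir(root: str, subset: str) -> Path:
    """Return the folder that holds the QA pairs for one subset."""
    if subset not in SUBSET_DIRS:
        raise ValueError(f"unknown subset: {subset!r}")
    return Path(root) / subset / "questions"
```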
## 🧠 Subset Descriptions

### 1️⃣ DECAF – Decomposed Elementary Charts with Answerable Facts
- Focus: Factual lookup and comparative reasoning on simplified single-variable charts.
- Sources: Derived from ChartQA, ChartLlama, ChartInfo, DVQA.
- Content: 1,188 decomposed charts and 2,809 QA pairs.
- Tasks: Identify, compare, or extract values across clean, minimal visuals.
### 2️⃣ SPECTRA – Synthetic Plots for Event-based Correlated Trend Reasoning and Analysis
- Focus: Trend correlation and scenario-based inference between synthetic chart pairs.
- Construction: Generated via Gemini 1.5 Pro + human validation to preserve shared axes and realism.
- Content: 870 unique charts, 1,717 QA pairs across 333 contexts.
- Tasks: Analyze multi-variable relationships, infer trends, and reason about co-evolving variables.
### 3️⃣ STORM – Sequential Temporal Reasoning Over Real-world Multi-domain Charts
- Focus: Multi-step reasoning, temporal analysis, and semantic alignment across real-world charts.
- Source: Curated from Our World in Data with metadata-driven semantic pairing.
- Content: 648 charts across 324 validated contexts, 768 QA pairs.
- Tasks: Align mismatched domains, estimate ranges, and reason about evolving trends.
## ⚙️ Evaluation & Methodology
INTERCHART supports both visual and table-based evaluation modes.
**Visual Inputs:**
- Combined: Charts stitched into a unified image.
- Interleaved: Charts provided sequentially.
**Structured Table Inputs:**
Models can extract tables using tools like DePlot or Gemini Title Extraction, followed by table-based QA.

**Prompting Strategies:**
- Zero-Shot
- Zero-Shot Chain-of-Thought (CoT)
- Few-Shot CoT with Directives (CoTD)
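The three strategies differ only in how the prompt is assembled. A minimal sketch follows; the template wording and function names are illustrative assumptions, not the paper's exact prompts:

```python
# Illustrative prompt assembly for the three strategies; the exact
# instruction wording used in the paper is not reproduced here.
COT_SUFFIX = "Let's think step by step."

def build_prompt(question, strategy="zero-shot", exemplars=None):
    if strategy == "zero-shot":
        return question
    if strategy == "zero-shot-cot":
        return f"{question}\n{COT_SUFFIX}"
    if strategy == "few-shot-cotd":
        # Prepend worked CoT exemplars (with directives) before the query.
        shots = "\n\n".join(exemplars or [])
        return f"{shots}\n\n{question}\n{COT_SUFFIX}"
    raise ValueError(f"unknown strategy: {strategy!r}")
```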
**Evaluation Pipeline:**
Answers are judged by multiple LLMs (Gemini 1.5 Flash, Phi-4, Qwen2.5), with majority voting deciding semantic correctness.
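Assuming each judge returns a binary verdict, the majority-voting step can be sketched as:

```python
from collections import Counter

def majority_vote(verdicts):
    """Aggregate binary judge verdicts (True = semantically correct).

    With an odd number of judges, such as the three listed above,
    a strict majority always exists.
    """
    counts = Counter(verdicts)
    return counts[True] > counts[False]
```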
## Dataset Statistics
| Subset | Charts | Contexts | QA Pairs | Reasoning Type Examples |
|---|---|---|---|---|
| DECAF | 1,188 | 355 | 2,809 | Factual lookup, comparison |
| SPECTRA | 870 | 333 | 1,717 | Trend correlation, event reasoning |
| STORM | 648 | 324 | 768 | Temporal reasoning, abstract numerical inference |
| Total | 2,706 | 1,012 | 5,214 | – |
## Usage

### Access & Download Instructions
Use an access token as your Git credential when cloning or pushing to the repository.

1. **Install Git LFS**
   Download and install it from https://git-lfs.com, then run:
   ```bash
   git lfs install
   ```
2. **Clone the dataset repository**
   When prompted for a password, use a Hugging Face access token (read access is enough to clone; write access is needed to push). You can generate one at https://huggingface.co/settings/tokens.
   ```bash
   git clone https://huggingface.co/datasets/interchart/Interchart
   ```
3. **Clone without large files (LFS pointers only)**
   If you only want a lightweight clone without downloading all the image data:
   ```bash
   GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/interchart/Interchart
   ```
4. **Alternative: use the Hugging Face CLI**
   Make sure the CLI is installed:
   ```bash
   pip install -U "huggingface_hub[cli]"
   ```
   Then download directly:
   ```bash
   hf download interchart/Interchart --repo-type=dataset
   ```
## Citation
If you use this dataset, please cite:
```bibtex
@article{iyengar2025interchart,
  title={INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information},
  author={Anirudh Iyengar Kaniyar Narayana Iyengar and Srija Mukhopadhyay and Adnan Qidwai and Shubhankar Singh and Dan Roth and Vivek Gupta},
  journal={arXiv preprint arXiv:2508.07630},
  year={2025}
}
```
## Links

- Paper: [arXiv:2508.07630v1](https://arxiv.org/abs/2508.07630)
- Website: https://coral-lab-asu.github.io/interchart/
- Explore Dataset: Interactive Evaluation Portal