---
task_categories:
- image-text-to-text
language:
- en
tags:
- vlm
- spatial-reasoning
- attention
---
# Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas
This repository provides datasets associated with the paper [Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas](https://huggingface.co/papers/2503.01773).
Code: [https://github.com/shiqichen17/AdaptVis](https://github.com/shiqichen17/AdaptVis)
## Abstract
Large Vision Language Models (VLMs) have long struggled with spatial reasoning tasks. Surprisingly, even simple spatial reasoning tasks, such as recognizing "under" or "behind" relationships between only two objects, pose significant challenges for current VLMs. In this work, we study the spatial reasoning challenge from the lens of mechanistic interpretability, diving into the model's internal states to examine the interactions between image and text tokens. By tracing attention distribution over the image throughout intermediate layers, we observe that successful spatial reasoning correlates strongly with the model's ability to align its attention distribution with actual object locations, particularly differing between familiar and unfamiliar spatial relationships. Motivated by these findings, we propose ADAPTVIS, which uses inference-time confidence scores to sharpen the attention on highly relevant regions when the model is confident, while smoothing and broadening the attention window to consider a wider context when confidence is lower. This training-free decoding method shows significant improvement (e.g., up to a 50 absolute point improvement) on spatial reasoning benchmarks such as WhatsUp and VSR with negligible cost. We make code and data publicly available for research purposes at [https://github.com/shiqichen17/AdaptVis](https://github.com/shiqichen17/AdaptVis).
<p align="center">
  <img src="https://github.com/shiqichen17/AdaptVis/blob/main/figures/main.png?raw=true" width="800">
</p>
## Datasets
This repository provides the datasets used in the paper. The code to load and evaluate each dataset is available in `dataset_zoo/aro_datasets.py` in the GitHub repository, and the question-answer data is located in `prompt/`.
The datasets evaluate VLMs' performance on spatial reasoning tasks and include:
* `COCO_one_obj`
* `COCO_two_obj`
* `Controlled_A`
* `Controlled_B`
* `VG_one_obj`
* `VG_two_obj`
## Sample Usage
### Load with Hugging Face `datasets`
You can easily load any configuration of this dataset using the `datasets` library:
```python
from datasets import load_dataset
# Load a specific configuration, e.g., 'COCO_one_obj'
dataset = load_dataset("AdaptVis/all_datasets", "COCO_one_obj")
# Access the test split
test_data = dataset["test"]
# Print an example
print(test_data[0])
```
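To sweep over every configuration (using the names listed under Datasets above), here is a small follow-up sketch, assuming the `datasets` library is installed:
```python
from datasets import load_dataset

# Configuration names as listed in the Datasets section.
CONFIGS = [
    "COCO_one_obj", "COCO_two_obj",
    "Controlled_A", "Controlled_B",
    "VG_one_obj", "VG_two_obj",
]

for name in CONFIGS:
    # Each configuration exposes a single "test" split.
    ds = load_dataset("AdaptVis/all_datasets", name, split="test")
    print(f"{name}: {len(ds)} examples")
```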
### Running Experiments with the Codebase
To set up the environment and run experiments for the `scaling_vis` and `adapt_vis` methods from the original repository, follow these steps:
**Setting up the environment**
```bash
git clone https://github.com/shiqichen17/AdaptVis.git
cd AdaptVis
mkdir data
mkdir output
pip install -r requirements.txt
```
**Downloading the data**
The data can be downloaded automatically when running experiments by setting `--download=True` (while running `python main_aro.py` or instantiating the dataset directly). Alternatively, you can download it manually from the Hugging Face Hub (this repository) or the provided [Google Drive link](https://drive.google.com/drive/u/3/folders/164q6X9hrvP-QYpi3ioSnfMuyHpG5oRkZ) in the GitHub README.
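For the Hub download path, a minimal sketch using `huggingface_hub` (assuming you want the files under the local `data/` directory created above):
```python
from huggingface_hub import snapshot_download

# Download this dataset repository into the local data/ directory.
snapshot_download(
    repo_id="AdaptVis/all_datasets",
    repo_type="dataset",
    local_dir="data",
)
```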
**Running an example experiment**
You can quickly run an example experiment using the provided `run.sh` script:
```bash
bash run.sh
```
**Arguments**
The `run.sh` script accepts various arguments to control the dataset, model, and method:
| Argument | Example | Description |
|---|---|---|
| `dataset` | `Controlled_Images_A` | The dataset to evaluate. Choose from `Controlled_Images_A`, `Controlled_Images_B`, etc. |
| `model` | `llava1.5` | The model to evaluate. |
| `method` | `scaling_vis` | The evaluation method: `scaling_vis` or `adapt_vis`. |
| `weight` | `1.2` | Scaling coefficient for `scaling_vis`. Typical values: `0, 0.5, 0.8, 1.2, 1.5, 2.0`. |
| `weight1` | `0.5` | Smoothing coefficient for `adapt_vis`, used when confidence is low. Typical values: `0.5, 0.8`. |
| `weight2` | `1.2` | Sharpening coefficient for `adapt_vis`, used when confidence is high. Typical values: `1.2, 1.5, 2.0`. |
| `threshold` | `0.3` | Confidence threshold for `adapt_vis`. |
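To illustrate how `weight1`, `weight2`, and `threshold` interact in `adapt_vis`, the sketch below shows the confidence-gated choice described in the paper; the function and variable names are illustrative, not the repository's API:
```python
def choose_attention_weight(confidence: float,
                            weight1: float = 0.5,
                            weight2: float = 1.2,
                            threshold: float = 0.3) -> float:
    """Pick the attention-scaling coefficient for the current generation step.

    Low confidence  -> weight1 (< 1), which smooths and broadens attention
                       over the image.
    High confidence -> weight2 (> 1), which sharpens attention on the
                       currently attended region.
    """
    return weight1 if confidence < threshold else weight2
```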
## Dataset Structure
The dataset contains multiple configurations, each with `id`, `question`, `answer`, and an image identifier (`image_id` or `image_path`). Below is a summary of the dataset splits and features:
```yaml
dataset_info:
- config_name: COCO_one_obj
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
sequence: string
- name: image_id
dtype: int64
splits:
- name: test
num_bytes: 294466
num_examples: 2247
download_size: 32405
dataset_size: 294466
- config_name: COCO_two_obj
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
sequence: string
- name: image_id
dtype: int64
splits:
- name: test
num_bytes: 63915
num_examples: 440
download_size: 12129
dataset_size: 63915
- config_name: Controlled_A
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
sequence: string
- name: image_path
dtype: string
splits:
- name: test
num_bytes: 76379
num_examples: 412
download_size: 11149
dataset_size: 76379
- config_name: Controlled_B
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
sequence: string
- name: image_path
dtype: string
splits:
- name: test
num_bytes: 75988
num_examples: 408
download_size: 10560
dataset_size: 75988
- config_name: VG_one_obj
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
sequence: string
- name: image_id
dtype: string
splits:
- name: test
num_bytes: 172016
num_examples: 1160
download_size: 25361
dataset_size: 172016
- config_name: VG_two_obj
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
sequence: string
- name: image_id
dtype: string
splits:
- name: test
num_bytes: 47464
num_examples: 291
download_size: 11402
dataset_size: 47464
configs:
- config_name: COCO_one_obj
data_files:
- split: test
path: COCO_one_obj/test-*
- config_name: COCO_two_obj
data_files:
- split: test
path: COCO_two_obj/test-*
- config_name: Controlled_A
data_files:
- split: test
path: Controlled_A/test-*
- config_name: Controlled_B
data_files:
- split: test
path: Controlled_B/test-*
- config_name: VG_one_obj
data_files:
- split: test
path: VG_one_obj/test-*
- config_name: VG_two_obj
data_files:
- split: test
path: VG_two_obj/test-*
```
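A brief sketch of reading one example and handling the two image-reference variants (`image_id` for the COCO/VG configurations, `image_path` for the Controlled ones); field names follow the schema above:
```python
from datasets import load_dataset

example = load_dataset("AdaptVis/all_datasets", "Controlled_A", split="test")[0]

print(example["question"])  # natural-language spatial question
print(example["answer"])    # list of answer strings
# COCO/VG configurations store an image_id; Controlled ones store an image_path.
image_ref = example.get("image_path", example.get("image_id"))
print(image_ref)
```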
## Citation
If you use this code or data, please consider citing our paper:
```bibtex
@misc{chen2025spatialreasoninghardvlms,
title={Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas},
author={Shiqi Chen and Tongyao Zhu and Ruochen Zhou and Jinghan Zhang and Siyang Gao and Juan Carlos Niebles and Mor Geva and Junxian He and Jiajun Wu and Manling Li},
year={2025},
eprint={2503.01773},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.01773},
}
```