---
task_categories:
- image-text-to-text
language:
- en
tags:
- vlm
- spatial-reasoning
- attention
---

# Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas

This repository provides datasets associated with the paper [Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas](https://huggingface.co/papers/2503.01773).

Code: [https://github.com/shiqichen17/AdaptVis](https://github.com/shiqichen17/AdaptVis)

## Abstract
Large Vision Language Models (VLMs) have long struggled with spatial reasoning tasks. Surprisingly, even simple spatial reasoning tasks, such as recognizing "under" or "behind" relationships between only two objects, pose significant challenges for current VLMs. In this work, we study the spatial reasoning challenge from the lens of mechanistic interpretability, diving into the model's internal states to examine the interactions between image and text tokens. By tracing the attention distribution over the image throughout the intermediate layers, we observe that successful spatial reasoning correlates strongly with the model's ability to align its attention distribution with actual object locations, particularly differing between familiar and unfamiliar spatial relationships. Motivated by these findings, we propose AdaptVis, which uses inference-time confidence scores to sharpen the attention on highly relevant regions when the model is confident, and to smooth and broaden the attention window to consider a wider context when confidence is lower. This training-free decoding method shows significant improvement (e.g., up to a 50-point absolute improvement) on spatial reasoning benchmarks such as WhatsUp and VSR with negligible cost. We make code and data publicly available for research purposes at [https://github.com/shiqichen17/AdaptVis](https://github.com/shiqichen17/AdaptVis).

<p align="center">
<img src="https://github.com/shiqichen17/AdaptVis/blob/main/figures/main.png?raw=true" width="800">
</p>
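
To make the decoding idea concrete, the snippet below is a minimal, hypothetical sketch of confidence-gated attention adjustment: when the model's confidence in its answer exceeds a threshold, the attention over image tokens is sharpened; otherwise it is flattened so a wider image region is considered. The function and parameter names (`adaptvis_rescale`, `weight_low`, `weight_high`) are illustrative only; the paper's actual implementation lives in the GitHub repository.

```python
import torch

def adaptvis_rescale(attn: torch.Tensor,
                     image_token_mask: torch.Tensor,
                     confidence: float,
                     threshold: float = 0.3,
                     weight_low: float = 0.5,
                     weight_high: float = 1.2) -> torch.Tensor:
    """Sharpen or smooth attention over image tokens based on confidence.

    attn: post-softmax attention weights of shape (..., seq_len).
    image_token_mask: boolean mask of shape (seq_len,) marking image tokens.
    confidence: e.g. the probability the model assigns to its generated answer.
    """
    # Exponent > 1 sharpens the distribution over image tokens (confident case);
    # exponent < 1 flattens it to take a broader image context into account.
    alpha = weight_high if confidence >= threshold else weight_low

    out = attn.clone()
    img = out[..., image_token_mask]
    total = img.sum(dim=-1, keepdim=True)               # attention mass on the image
    img = img.clamp_min(1e-12) ** alpha
    img = img / img.sum(dim=-1, keepdim=True) * total   # keep the same total mass
    out[..., image_token_mask] = img
    return out
```

In the actual method this kind of rescaling happens inside the attention layers at inference time; the sketch only illustrates the sharpen-versus-smooth behaviour that the `weight1`, `weight2`, and `threshold` arguments in the table below control.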

## Datasets
This repository provides the datasets used in the paper. The code to load and evaluate each dataset is available in `dataset_zoo/aro_datasets.py` in the GitHub repository, and the question-and-answer data is located in `prompt/`.

The datasets evaluate VLMs' performance on spatial reasoning tasks and include the following configurations:
*   `COCO_one_obj`
*   `COCO_two_obj`
*   `Controlled_A`
*   `Controlled_B`
*   `VG_one_obj`
*   `VG_two_obj`

## Sample Usage

### Load with Hugging Face `datasets`
You can easily load any configuration of this dataset using the `datasets` library:

```python
from datasets import load_dataset

# Load a specific configuration, e.g., 'COCO_one_obj'
dataset = load_dataset("AdaptVis/all_datasets", "COCO_one_obj")

# Access the test split
test_data = dataset["test"]

# Print an example
print(test_data[0])
```

### Running Experiments with the Codebase
To set up the environment and run experiments with the `scaling_vis` and `adapt_vis` methods from the original repository, follow these steps:

**Setting up the environment**

```bash
git clone https://github.com/shiqichen17/AdaptVis.git
cd AdaptVis
mkdir data
mkdir output
pip install -r requirements.txt
```

**Downloading the data**

The data can be downloaded automatically by passing `--download=True` when running `python main_aro.py`, or when instantiating the dataset classes directly. Alternatively, you can download it manually from the Hugging Face Hub (this repository) or from the [Google Drive link](https://drive.google.com/drive/u/3/folders/164q6X9hrvP-QYpi3ioSnfMuyHpG5oRkZ) provided in the GitHub README.
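
If you prefer to fetch the raw files from the Hub programmatically, one option is a snapshot download into the `data/` directory created during setup. This assumes the `huggingface_hub` package is installed; it is a general Hub utility, not part of the AdaptVis codebase.

```python
from huggingface_hub import snapshot_download

# Download all dataset files from this repository into ./data
snapshot_download(
    repo_id="AdaptVis/all_datasets",
    repo_type="dataset",
    local_dir="data",
)
```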

**Running an example experiment**

You can quickly run an example experiment using the provided `run.sh` script:

```bash
bash run.sh
```

**Arguments**

The `run.sh` script accepts various arguments to control the dataset, model, and method:

| Argument | Example | Description |
|---|---|---|
| `dataset` | `Controlled_Images_A` | The dataset to evaluate, e.g. `Controlled_Images_A`, `Controlled_Images_B`, etc. |
| `model` | `llava1.5` | The model to evaluate. |
| `method` | `scaling_vis` | The evaluation method: `scaling_vis` or `adapt_vis`. |
| `weight` | `1.2` | Scaling coefficient for `scaling_vis`; typical values are `0`, `0.5`, `0.8`, `1.2`, `1.5`, `2.0`. |
| `weight1` | `0.5` | First coefficient for `adapt_vis`; typical values are `0.5`, `0.8`. |
| `weight2` | `1.2` | Second coefficient for `adapt_vis`; typical values are `1.2`, `1.5`, `2.0`. |
| `threshold` | `0.3` | Confidence threshold for `adapt_vis`. |

## Dataset Structure
The dataset contains multiple configurations, each with `id`, `question`, `answer`, and an image identifier (`image_id` or `image_path`). Below is a summary of the dataset splits and features:

```yaml
dataset_info:
- config_name: COCO_one_obj
  features:
  - name: id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: image_id
    dtype: int64
  splits:
  - name: test
    num_bytes: 294466
    num_examples: 2247
  download_size: 32405
  dataset_size: 294466
- config_name: COCO_two_obj
  features:
  - name: id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: image_id
    dtype: int64
  splits:
  - name: test
    num_bytes: 63915
    num_examples: 440
  download_size: 12129
  dataset_size: 63915
- config_name: Controlled_A
  features:
  - name: id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: image_path
    dtype: string
  splits:
  - name: test
    num_bytes: 76379
    num_examples: 412
  download_size: 11149
  dataset_size: 76379
- config_name: Controlled_B
  features:
  - name: id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: image_path
    dtype: string
  splits:
  - name: test
    num_bytes: 75988
    num_examples: 408
  download_size: 10560
  dataset_size: 75988
- config_name: VG_one_obj
  features:
  - name: id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: image_id
    dtype: string
  splits:
  - name: test
    num_bytes: 172016
    num_examples: 1160
  download_size: 25361
  dataset_size: 172016
- config_name: VG_two_obj
  features:
  - name: id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: image_id
    dtype: string
  splits:
  - name: test
    num_bytes: 47464
    num_examples: 291
  download_size: 11402
  dataset_size: 47464
configs:
- config_name: COCO_one_obj
  data_files:
  - split: test
    path: COCO_one_obj/test-*
- config_name: COCO_two_obj
  data_files:
  - split: test
    path: COCO_two_obj/test-*
- config_name: Controlled_A
  data_files:
  - split: test
    path: Controlled_A/test-*
- config_name: Controlled_B
  data_files:
  - split: test
    path: Controlled_B/test-*
- config_name: VG_one_obj
  data_files:
  - split: test
    path: VG_one_obj/test-*
- config_name: VG_two_obj
  data_files:
  - split: test
    path: VG_two_obj/test-*
```
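
To see how the per-configuration schemas differ (the COCO and VG configurations reference images by `image_id`, while the Controlled configurations store an `image_path`), you can inspect the features directly:

```python
from datasets import load_dataset

# Compare the schema of an image_id-based and an image_path-based configuration
for config in ["COCO_one_obj", "Controlled_A"]:
    ds = load_dataset("AdaptVis/all_datasets", config, split="test")
    print(config, ds.features)
```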

## Citation
If you use this code or data, please consider citing our paper:
```bibtex
@misc{chen2025spatialreasoninghardvlms,
      title={Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas}, 
      author={Shiqi Chen and Tongyao Zhu and Ruochen Zhou and Jinghan Zhang and Siyang Gao and Juan Carlos Niebles and Mor Geva and Junxian He and Jiajun Wu and Manling Li},
      year={2025},
      eprint={2503.01773},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.01773}, 
}
```