RSCCM: Remote Sensing Change Captioning Model

Overview

RSCCM is a supervised full-parameter fine-tuned version of Qwen2.5-VL-7B-Instruct specialized for remote sensing change captioning, trained on the RSCC dataset. Training details are provided in our paper.

Installation

Follow the official Qwen2.5-VL Hugging Face repo (see here).

pip install transformers accelerate # the latest stable release already integrates Qwen2.5-VL
pip install qwen-vl-utils[decord]==0.0.8
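
Qwen2.5-VL support requires a fairly recent transformers release. As a quick sanity check (a minimal sketch; the 4.49.0 lower bound is our assumption, so consult the official repo for the exact minimum version), you can verify the installed version before loading the model:

import transformers
from packaging import version

# Qwen2.5-VL classes ship with recent transformers releases;
# 4.49.0 is an assumed lower bound -- check the official repo if unsure.
assert version.parse(transformers.__version__) >= version.parse("4.49.0"), (
    f"transformers {transformers.__version__} may be too old for Qwen2.5-VL"
)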

Inference

For more implementation details, refer to the official Qwen2.5-VL GitHub repo (see here).

  1. Load the model (the same as Qwen2.5-VL)
from transformers import (
    Qwen2_5_VLForConditionalGeneration,
    AutoProcessor
)
import torch
model_id = "BiliSakura/RSCCM"
model_path = model_id  # downloaded from huggingface.co automatically; or set this to a local path/to/your/model/folder
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")
processor = AutoProcessor.from_pretrained(model_path)
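
Note that flash_attention_2 requires the separate flash-attn package and a compatible GPU. As a fallback sketch (not a requirement of RSCCM itself), the model also loads with PyTorch's built-in scaled-dot-product attention:

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # built-in PyTorch attention; no flash-attn install needed
).to("cuda")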
  2. Get image pairs
from PIL import Image
pre_img_path = "path/to/pre/event/image"
post_img_path = "path/to/post/event/image"
text_prompt = """
Give change description between two satellite images.
Output answer in a news style with a few sentences using precise phrases separated by commas.
"""
pre_image = Image.open(pre_img_path)
post_image = Image.open(post_img_path)
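
Large satellite tiles can produce a very large number of visual tokens. The Qwen2.5-VL processor accepts min_pixels/max_pixels to bound the resolution each image is resized to; the budgets below follow the generic Qwen2.5-VL examples and are illustrative, not values tuned for RSCCM:

# Optionally bound the per-image resolution (each visual token covers a 28x28 patch).
processor = AutoProcessor.from_pretrained(
    model_path,
    min_pixels=256 * 28 * 28,   # illustrative lower budget
    max_pixels=1280 * 28 * 28,  # illustrative upper budget
)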
  3. Inference
from qwen_vl_utils import process_vision_info
import torch
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": pre_image},
            {"type": "image", "image": post_image},
            {
                "type": "text",
                "text": text_prompt,
            },
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, _ = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, padding=True, return_tensors="pt"
).to("cuda", torch.bfloat16)
# Generate captions for the input image pair
generated_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    # temperature=TEMPERATURE
)
generated_ids_trimmed = [
    out_ids[len(in_ids) :]
    for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
captions = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)
change_caption = captions[0]
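
For batch processing of many image pairs, the steps above can be wrapped in a small helper. This is a convenience sketch; caption_change is a hypothetical name and not part of the released code:

def caption_change(pre_image, post_image, prompt=text_prompt, max_new_tokens=512):
    """Generate a change caption for one pre/post event image pair."""
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": pre_image},
                {"type": "image", "image": post_image},
                {"type": "text", "text": prompt},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, _ = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, padding=True, return_tensors="pt"
    ).to("cuda", torch.bfloat16)
    generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    trimmed = [
        out_ids[len(in_ids):]
        for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
    ]
    return processor.batch_decode(
        trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]

print(caption_change(pre_image, post_image))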

📜 Citation

@misc{rscc_chen_2025,
  title={RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events},
  author={Zhenyuan Chen},
  year={2025},
  howpublished={\url{https://github.com/Bili-Sakura/RSCC}}
}
@article{qwen2.5vl,
  title={Qwen2.5-VL Technical Report},
  author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},
  year={2025},
  month=feb,
  url={http://arxiv.org/abs/2502.13923},
  doi={10.48550/arXiv.2502.13923}
}