
RSCoVLM: Co-Training Vision Language Models for Remote Sensing Multi-task Learning

Qingyun Li*, Shuran Ma*, Junwei Luo*, Yi Yu*, Yue Zhou, Fengxiang Wang, Xudong Lu, Xiaoxing Wang, Xin He, Yushi Chen, Xue Yang

If you find our work helpful, please consider giving us a ⭐!

This repo is a technical practice of collaboratively training large vision language models with remote sensing data, and it hosts the official implementation of the paper: Co-Training Vision Language Models for Remote Sensing Multi-task Learning.

The official RSCoVLM is based on Qwen2.5-VL. We also support a version based on Qwen3-VL; check out the qwen3 branch.
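
For example, switching to that branch in an existing clone (a minimal sketch, assuming the repo is already cloned and the qwen3 branch exists on origin):

cd RSCoVLM
git fetch origin
git checkout qwen3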

Abstract

Figure: RSCoVLM method overview (fig_method).

With Transformers achieving outstanding performance on individual remote sensing (RS) tasks, we are now approaching the realization of a unified model that excels across multiple tasks through multi-task learning (MTL). Compared to single-task approaches, MTL methods offer improved generalization, enhanced scalability, and greater practical applicability. Recently, vision language models (VLMs) have achieved promising results in RS image understanding, grounding, and ultra-high-resolution (UHR) image reasoning, respectively. Moreover, the unified text-based interface demonstrates significant potential for MTL. Hence, in this work, we present RSCoVLM, a simple yet flexible VLM baseline for RS MTL. Firstly, we create the data curation engine, including data acquisition, offline processing and integration, as well as online loading and weighting. This data engine effectively addresses the complex RS data environment and generates flexible vision-language conversations. Furthermore, we propose a unified dynamic-resolution strategy to address the diverse image scales inherent in RS imagery. For UHR images, we introduce the Zoom-in Chain mechanism together with its corresponding dataset, LRS-VQA-Zoom. These strategies are flexible and effectively mitigate the computational burden. Additionally, we significantly enhance the model’s object detection capability and propose a novel evaluation protocol that ensures fair comparison between VLMs and conventional detection models. Extensive experiments demonstrate that RSCoVLM achieves state-of-the-art performance across diverse tasks, outperforming existing RS VLMs and even rivaling specialized expert models. All the training and evaluation tools, model weights, and datasets have been fully open-sourced to support reproducibility. We expect that this baseline will promote further progress toward general-purpose RS models.

Get Started

First, refer to Enviroment.md to prepare an environment.

To train RSCoVLM, first refer to Data.md to prepare or download the data.

NOTE: We support multi-node distributed training based on torchrun. If your resource platform is different and requires multi-node distributed training, you may need to adapt the shell scripts to your platform. Alternatively, you can multiply the gradient_accumulation_steps option by the node count to match the effective batch size on a single node. Contact us in an issue for more support.
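
As a rough sketch of what a multi-node torchrun launch can look like (the entry-point path, process count, and environment variables below are placeholders, not the repo's actual script contents):

# run this on every node, with NODE_RANK set to 0..NNODES-1 per node
torchrun --nnodes ${NNODES} --node_rank ${NODE_RANK} \
  --master_addr ${MASTER_ADDR} --master_port 29500 \
  --nproc_per_node 8 \
  path/to/train_entrypoint.py  # see scripts/train_multitask_7b.sh for the real arguments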

Practices

  • train RSCoVLM for multi-task learning
bash scripts/train_multitask_7b.sh
  • train RSCoVLM only for aerial detection
bash scripts/train_dota-poly-trainval512_3b.sh
bash scripts/train_dota-poly-trainval512_7b.sh

To be honest, the 3B model outperforms the 7B model on this task.

  • eval RSCoVLM for multiple benchmarks
bash scripts/eval.sh

Interface

Open an issue if you have questions.

Some options of the training script (you can see the full list in params.py; a usage sketch follows this list):

  • datasets, image_folder, data_path: data config, see this instruction
  • model_id: the pretrained model path or name on the Hugging Face Hub.
  • max_length: max length for the language model.
  • min_pixels and max_pixels: the two parameters of the Qwen image preprocessor. Note that these follow the original min_pixels/max_pixels convention, not min/max patches.
  • prob_random_resize: controls random resizing for dynamic-resolution training.
  • prob_proxy_prompt: for the detection and grounding tasks, controls the use of diverse prompts.
  • prob_plain_text_prompt: for the detection and grounding tasks, controls whether to use JSON format or plain text format.
  • keep_empty_gt: for detection, controls whether to keep samples whose annotations contain no objects. This is IMPORTANT.
  • ......
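
As a usage sketch only (the flag syntax, entry-point path, and values are assumptions for illustration; the real invocations live in the scripts under scripts/ and the full option definitions in params.py):

# hypothetical single-GPU debug run; values are illustrative only
python path/to/train_entrypoint.py \
  --model_id Qwen/Qwen2.5-VL-7B-Instruct \
  --max_length 4096 \
  --min_pixels 65536 --max_pixels 1048576 \
  --prob_random_resize 0.5 \
  --keep_empty_gt True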

Some options of the evaluation script (you can see the full list in params.py; a usage sketch follows this list):

  • model_ckpt_path: checkpoint path; you can pass multiple checkpoints.
  • benchmarks: which benchmarks to evaluate.
  • use_vllm: whether to use vLLM to accelerate inference.
  • save_path: folder in which to save evaluation logs and results.
  • eval_intermediate_checkpoints: whether to evaluate intermediate checkpoints.
  • pass_evaluate: only run inference and dump the results, without evaluating them (inference requires a GPU, while evaluation does not).
  • clip_num: clip the dataset when you want results quickly or want to visualize them.
  • shuffle_seed: seed for clipping the dataset.
  • ......
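
Again as a sketch only (the flag syntax, entry-point path, and values are assumptions; the real invocation is in scripts/eval.sh):

# hypothetical invocation; benchmark names and paths are placeholders
python path/to/eval_entrypoint.py \
  --model_ckpt_path work_dirs/rscovlm-7b/checkpoint-1000 \
  --benchmarks <benchmark_names> \
  --use_vllm True \
  --save_path work_dirs/eval_results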

Update to the latest version

We may update the codebase; the commit log is here.

If you have installed the previous version and would like to update to the latest version, you can:

cd RSCoVLM
git pull origin main  # if you have modification, commit your changes and merge the branches
pip install -e .

Contact and Acknowledge

Feel free to contact me through my email (21b905003@stu.hit.edu.cn) or a GitHub issue. I'll continue to maintain this repo.

The code is based on Transformers and MMRotate. Many modules refer to InternVL and LLaVA. The model architecture benefits from the open-source general-purpose vision-language models of the Qwen-VL series. Thanks for their brilliant work.

Thanks for the following valuable resources for training Qwen2.5-VL:

  • [sh | py]: A demo script for fine-tuning Qwen2-VL using a HuggingFace-datasets-style dataset (cauldron) and SFTTrainer from the HuggingFace TRL codebase.
  • [homepage | py]: A demo script for fine-tuning Qwen2/2.5-VL using a LLaVA-style dataset and Trainer from the HuggingFace Transformers codebase.
  • EfficiencyCallback: A callback to track the efficiency of the training process. The tracked stats include step time, memory, and throughput. It requires including --include_num_input_tokens_seen and logging_steps=1 in the training arguments (a usage sketch follows this list).
  • Qwen2.5-VL official grounding cookbook: A notebook on visual grounding with Qwen2.5-VL.
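
For instance, if your training script forwards extra command-line arguments to the trainer (an assumption; otherwise set these inside the script), enabling the callback's tracking would look roughly like:

# hypothetical; requires the script to pass these through to the HuggingFace training arguments
bash scripts/train_multitask_7b.sh \
  --include_num_input_tokens_seen \
  --logging_steps 1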

Many thanks to the Chinese WeChat articles from 遥感与深度学习 (Remote Sensing and Deep Learning): "Latest Paper | RSCoVLM: HIT et al. propose a unified VLM supporting both regular and UHR remote sensing images, with excellent multi-task, multi-resolution performance! Data and code open-sourced!", and from 地球洞察 (Earth Insight): "[2025-12-01 Paper Deep Dive] HIT et al. release RSCoVLM!". There are many high-quality Chinese articles about the latest remote sensing papers in their channels.

Citation

If you find our paper or benchmark helpful for your research, please consider citing our paper and giving this repo a star ⭐. Thank you very much!

@ARTICLE{li2026rscovlm,
  author={Li, Qingyun and Ma, Shuran and Luo, Junwei and Yu, Yi and Zhou, Yue and Wang, Fengxiang and Lu, Xudong and Wang, Xiaoxing and He, Xin and Chen, Yushi and Yang, Xue},
  title={Co-Training Vision-Language Models for Remote Sensing Multi-Task Learning},
  journal={Remote Sensing},
  volume={18},
  year={2026},
  number={2},
  article-number={222},
  url={https://www.mdpi.com/2072-4292/18/2/222},
  issn={2072-4292},
  doi={10.3390/rs18020222}
}

@INPROCEEDINGS{li2025lmmrotate,
  author={Li, Qingyun and He, Xin and Shu, Xinya and Yu, Yi and Chen, Dong and Chen, Yushi and Yang, Xue},
  booktitle={IGARSS 2025 - 2025 IEEE International Geoscience and Remote Sensing Symposium}, 
  title={A Simple Aerial Detection Baseline of Multimodal Language Models}, 
  year={2025},
  pages={6833-6837},
  doi={10.1109/IGARSS55030.2025.11242725}
}
