Learning Trajectory-Aware Multimodal Large Language Models for Video Reasoning Segmentation

πŸ“… 2026-03-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of effectively modeling dynamic object trajectories in video reasoning segmentation, a task hindered by existing methods' reliance on unidirectional and implicit text-trajectory alignment. To overcome this limitation, the authors propose TrajSeg, a framework that introduces an explicit bidirectional text-trajectory alignment mechanism into multimodal large language models, enhancing their perception of object motion trajectories in videos. TrajSeg further incorporates a frame-level content integration module and a unified mask decoder, yielding a simplified, end-to-end trainable architecture for full-video inference and segmentation. Extensive experiments show that TrajSeg consistently outperforms state-of-the-art methods across multiple referring and reasoning video segmentation benchmarks on all evaluation metrics.

πŸ“ Abstract
The prosperity of Multimodal Large Language Models (MLLMs) has stimulated the demand for video reasoning segmentation, which aims to segment video objects based on human instructions. Previous studies rely on unidirectional and implicit text-trajectory alignment, which struggles with trajectory perception when faced with severe video dynamics. In this work, we propose TrajSeg, a simple and unified framework built upon MLLMs. Concretely, we introduce bidirectional text-trajectory alignment, where MLLMs accept grounding-intended (text-to-trajectory) and captioning-intended (trajectory-to-text) instructions. This way, MLLMs can benefit from enhanced correspondence and better perceive object trajectories in videos. The mask generation from trajectories is achieved via a frame-level content integration (FCI) module and a unified mask decoder. The former adapts the MLLM-parsed trajectory-level token to frame-specific information. The latter unifies segmentation for all frames into a single structure, enabling the proposed framework to be simplified and end-to-end trainable. Extensive experiments on referring and reasoning video segmentation datasets demonstrate the effectiveness of TrajSeg, which outperforms all video reasoning segmentation methods on all metrics. The code will be publicly available at https://github.com/haodi19/TrajSeg.
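The abstract's core idea, bidirectional text-trajectory alignment, pairs a grounding direction (text-to-trajectory) with a captioning direction (trajectory-to-text) so the model learns the correspondence both ways. The paper does not publish its training objective here; the sketch below is only a hypothetical illustration of such a symmetric alignment loss, using a standard symmetric InfoNCE formulation over matched instruction/trajectory embedding pairs (all names and the temperature value are assumptions, not taken from the paper):

```python
# Toy sketch of a bidirectional text-trajectory alignment objective
# (hypothetical; the actual TrajSeg loss is not specified in this abstract).
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def bidirectional_alignment_loss(text_emb, traj_emb, temperature=0.07):
    """Symmetric InfoNCE over matched (text_i, trajectory_i) pairs.

    text -> trajectory models the grounding direction;
    trajectory -> text models the captioning direction.
    """
    t = l2_normalize(text_emb)          # (N, D) instruction embeddings
    j = l2_normalize(traj_emb)          # (N, D) trajectory-token embeddings
    logits = t @ j.T / temperature      # (N, N) pairwise similarities
    idx = np.arange(len(t))
    # Grounding direction: softmax over candidate trajectories for each text.
    log_p_t2j = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Captioning direction: softmax over candidate texts for each trajectory.
    log_p_j2t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    # Average the negative log-likelihood of the true pair in both directions.
    return -(log_p_t2j[idx, idx].mean() + log_p_j2t[idx, idx].mean()) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
aligned = bidirectional_alignment_loss(emb, emb)        # perfectly matched pairs
shuffled = bidirectional_alignment_loss(emb, emb[::-1]) # mismatched pairs
```

Under this formulation, correctly matched text/trajectory pairs yield a much lower loss than mismatched ones, which is what pushes the MLLM's trajectory tokens and instruction embeddings into correspondence in both directions.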
Problem

Research questions and friction points this paper is trying to address.

video reasoning segmentation
trajectory perception
multimodal large language models
text-trajectory alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

bidirectional text-trajectory alignment
trajectory-aware MLLM
frame-level content integration
unified mask decoder
video reasoning segmentation
πŸ”Ž Similar Papers