🤖 AI Summary
To address the inaccurate spatial representation of fast-moving objects and the neglect of temporally sensitive regions in multimodal video understanding, this paper proposes Dynamic Image (DynImg): a method that uses non-key frames as temporal prompts to enhance the spatial feature encoding of motion regions, and introduces a 4D Rotary Position Embedding to explicitly model spatio-temporal adjacency. By combining a multimodal large language model with this prompting mechanism, DynImg achieves joint spatio-temporal modeling without significant computational overhead. Evaluated on multiple mainstream video understanding benchmarks, DynImg improves over existing state-of-the-art methods by approximately 2% on average, demonstrating the effectiveness and generalizability of its dynamic spatio-temporal representation.
📝 Abstract
In recent years, the introduction of Multi-modal Large Language Models (MLLMs) into video understanding tasks has become increasingly prevalent. However, how to effectively integrate temporal information remains a critical research focus. Traditional approaches treat spatial and temporal information separately. Due to issues like motion blur, it is challenging to accurately represent the spatial information of rapidly moving objects. This can lead to temporally important regions being underemphasized during spatial feature extraction, which in turn hinders accurate spatio-temporal interaction and video understanding. To address this limitation, we propose an innovative video representation method called Dynamic-Image (DynImg). Specifically, we introduce a set of non-key frames as temporal prompts to highlight the spatial areas containing fast-moving objects. During visual feature extraction, these prompts guide the model to pay additional attention to the fine-grained spatial features of these regions. Moreover, to maintain the correct sequence for DynImg, we employ a corresponding 4D video Rotary Position Embedding. This retains both the temporal and spatial adjacency of DynImg, helping the MLLM understand the spatio-temporal order within this combined format. Experimental evaluations reveal that DynImg surpasses state-of-the-art methods by approximately 2% across multiple video understanding benchmarks, proving the effectiveness of our temporal prompts in enhancing video comprehension.
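The abstract does not give the exact form of the 4D Rotary Position Embedding, but the general idea of a multi-axis RoPE can be sketched as follows: split each token's channels into four groups and apply a standard 1D rotary rotation per group, one group per coordinate axis. A minimal sketch, assuming (purely as an illustration, not the paper's implementation) that the four axes are frame time, row, column, and a sub-image index within the composed DynImg; `rope_1d`, `rope_4d`, and all shapes are hypothetical names:

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Standard rotary embedding along one coordinate.

    x: (..., d) with d even; pos: positions broadcastable to x[..., 0].
    """
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)        # per-channel frequencies
    angles = np.asarray(pos)[..., None] * freqs      # (..., half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    # rotate each (x1_i, x2_i) channel pair by its position-dependent angle
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def rope_4d(x, coords):
    """4-axis rotary embedding: one channel group per coordinate.

    x: (n_tokens, d) with d divisible by 8;
    coords: (n_tokens, 4) giving hypothetical (time, row, col, sub-image) positions.
    """
    n, d = x.shape
    assert d % 8 == 0, "need an even number of channels per axis group"
    g = d // 4
    return np.concatenate(
        [rope_1d(x[:, i * g:(i + 1) * g], coords[:, i]) for i in range(4)],
        axis=1,
    )
```

Because each group is a pure rotation, token norms are preserved, and attention scores between rotated queries and keys depend only on coordinate differences, which is what lets the model recover relative spatio-temporal order from the flattened DynImg layout.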