🤖 AI Summary
To address the inefficiency and difficulty of modeling long-range temporal dependencies in long-video understanding, this paper proposes a hybrid Mamba-Transformer architecture: Mamba efficiently captures global temporal dynamics, while the Transformer strengthens local fine-grained interactions. The paper introduces TransV, a module that performs directed information compression and transfer from visual tokens to instruction tokens, substantially mitigating visual redundancy, and unifies the visual encoder with the large language model to improve cross-modal alignment and temporal reasoning. The method achieves state-of-the-art performance across multiple long-video understanding benchmarks, supports end-to-end processing of videos exceeding 10,000 frames (hour-scale), and extends frame capacity by 2–3× over existing approaches. Notably, the authors uncover a dynamic division of labor between attention mechanisms within the hybrid architecture, revealing how Mamba and Transformer layers complementarily specialize across temporal scales. This work establishes a novel paradigm for efficient multimodal modeling of long-duration video.
📝 Abstract
We introduce TimeViper, a hybrid vision-language model designed to tackle the challenges of long-video understanding. Processing long videos demands both an efficient model architecture and an effective mechanism for handling extended temporal contexts. To this end, TimeViper adopts a hybrid Mamba-Transformer backbone that combines the efficiency of state-space models with the expressivity of attention mechanisms. Through this hybrid design, we reveal a vision-to-text information aggregation phenomenon: information progressively flows from vision tokens to text tokens with increasing LLM depth, leaving the vision tokens severely redundant. Motivated by this observation, we propose TransV, a token information transfer module that transfers and compresses vision tokens into instruction tokens while maintaining multimodal understanding capabilities. This design enables TimeViper to process hour-long videos exceeding 10,000 frames. Extensive experiments across multiple benchmarks demonstrate that TimeViper competes with state-of-the-art models while supporting substantially more input frames. We further analyze the attention behaviors of both Mamba and Transformer layers, offering new insights into hybrid model interpretability. This work represents an initial step towards developing, interpreting, and compressing hybrid Mamba-Transformer architectures.
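The core idea behind TransV, transferring information from vision tokens into a much smaller set of instruction tokens and then discarding the vision tokens, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual TransV implementation: it models the transfer as a single-head cross-attention in which instruction tokens attend to vision tokens, and the function and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transv_compress(vision_tokens, instr_tokens):
    """Toy TransV-style compression (hypothetical, not the paper's code):
    instruction tokens absorb vision-token information via cross-attention,
    after which the vision tokens are dropped from the sequence.

    vision_tokens: (Nv, d) array, instr_tokens: (Ni, d) array, Ni << Nv.
    Returns updated instruction tokens of shape (Ni, d)."""
    d = instr_tokens.shape[-1]
    scores = instr_tokens @ vision_tokens.T / np.sqrt(d)  # (Ni, Nv)
    attn = softmax(scores, axis=-1)
    gathered = attn @ vision_tokens                       # (Ni, d)
    # Residual update: each instruction token keeps its own content
    # plus a weighted summary of the visual stream.
    return instr_tokens + gathered

rng = np.random.default_rng(0)
vision = rng.standard_normal((10_000, 64))  # toy stand-in for many frame tokens
instr = rng.standard_normal((16, 64))       # short instruction sequence
compressed = transv_compress(vision, instr)
print(compressed.shape)  # sequence passed onward contains only 16 tokens
```

The point of the sketch is the asymmetry: after the transfer, downstream layers operate only on the short instruction sequence, which is what allows the frame count to grow without the token count growing with it.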