🤖 AI Summary
Existing streaming video understanding methods are predominantly reactive, relying on alternating perception-action loops or asynchronous triggers and lacking task-driven planning and future prediction, which limits their applicability in proactive decision-making scenarios such as autonomous driving. To address this, the paper proposes an intelligent agent framework, StreamAgent, endowed with future anticipation. The approach introduces a task-oriented forward-looking reasoning mechanism that integrates spatiotemporal attention alignment with hierarchical streaming key-value (KV) caching. This enables the agent to proactively focus on salient regions and critical temporal segments, forecast how events will unfold, and dynamically adapt its perception-action policy. The framework supports efficient semantic retrieval and incremental context updating, substantially reducing computational overhead on long sequences. Extensive experiments on multiple streaming and long-video understanding benchmarks demonstrate a superior trade-off between response accuracy and inference efficiency, validating its practicality and state-of-the-art performance in real-time applications.
📝 Abstract
Real-time streaming video understanding in domains such as autonomous driving and intelligent surveillance poses challenges beyond conventional offline video processing, requiring continuous perception, proactive decision making, and responsive interaction based on dynamically evolving visual content. However, existing methods rely on alternating perception-reaction loops or asynchronous triggers, lacking task-driven planning and future anticipation, which limits their real-time responsiveness and proactive decision making in evolving video streams. To this end, we propose StreamAgent, which anticipates the temporal intervals and spatial regions expected to contain future task-relevant information, enabling proactive and goal-driven responses. Specifically, we integrate question semantics and historical observations by prompting the anticipatory agent to anticipate the temporal progression of key events, align current observations with the expected future evidence, and subsequently adjust its perception actions (e.g., attending to task-relevant regions or continuously tracking objects across subsequent frames). To enable efficient inference, we design a streaming KV-cache memory mechanism that constructs a hierarchical memory structure for selective recall of relevant tokens, enabling efficient semantic retrieval while avoiding the overhead of storing all tokens as in a conventional KV-cache. Extensive experiments on streaming and long video understanding tasks demonstrate that our method outperforms existing methods in response accuracy and real-time efficiency, highlighting its practical value for real-world streaming scenarios.
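The hierarchical streaming KV-cache idea can be illustrated with a minimal sketch: keep recent token KVs at full resolution in a short-term window, pool evicted entries into coarse long-term summaries, and recall only the summaries most similar to the current query. This is a toy illustration under our own assumptions, not the paper's implementation; the class name `HierarchicalKVCache`, the mean-pooling compression, and the cosine-similarity recall are all hypothetical stand-ins for whatever the authors actually use.

```python
import numpy as np

class HierarchicalKVCache:
    """Toy two-level streaming KV-cache (illustrative sketch only).

    Recent tokens stay in a fine-grained short-term window; when the
    window overflows, the oldest chunk is mean-pooled into a single
    long-term summary entry. A query then selectively recalls the
    top-k most similar summaries instead of attending to all tokens.
    """

    def __init__(self, dim, window=8, chunk=4, top_k=2):
        self.dim, self.window, self.chunk, self.top_k = dim, window, chunk, top_k
        self.short_k, self.short_v = [], []   # fine-grained recent tokens
        self.long_k, self.long_v = [], []     # pooled chunk summaries

    def append(self, k, v):
        """Add one token's key/value; compress the oldest chunk on overflow."""
        self.short_k.append(k)
        self.short_v.append(v)
        if len(self.short_k) > self.window:
            # Evict the oldest chunk, pooling it into one summary entry.
            self.long_k.append(np.mean(self.short_k[:self.chunk], axis=0))
            self.long_v.append(np.mean(self.short_v[:self.chunk], axis=0))
            del self.short_k[:self.chunk]
            del self.short_v[:self.chunk]

    def recall(self, query):
        """Return recent KVs plus the top-k most query-relevant summaries."""
        keys, vals = list(self.short_k), list(self.short_v)
        if self.long_k:
            sims = [float(query @ k) /
                    (np.linalg.norm(query) * np.linalg.norm(k) + 1e-8)
                    for k in self.long_k]
            for i in sorted(np.argsort(sims)[-self.top_k:]):
                keys.append(self.long_k[i])
                vals.append(self.long_v[i])
        return np.stack(keys), np.stack(vals)

# Stream 20 token KVs through the cache, then recall with a query.
rng = np.random.default_rng(0)
cache = HierarchicalKVCache(dim=4)
for _ in range(20):
    cache.append(rng.normal(size=4), rng.normal(size=4))
keys, vals = cache.recall(np.ones(4))
```

After 20 streamed tokens the cache holds only 8 fine-grained entries plus 3 pooled summaries, and `recall` attends over 10 KV pairs rather than all 20, which is the memory/computation saving the abstract attributes to selective recall.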