🤖 AI Summary
This work addresses a key limitation of existing video large language models: most are confined to offline processing and cannot sustain open-domain, long-duration interaction over real-time video streams. To overcome this, the authors propose an end-to-end streaming visual interaction framework that unifies continuous video understanding and real-time response generation in a single architecture, moving beyond conventional decoupled trigger-response pipelines. The approach combines context management, tailored data construction, specialized training objectives, and deployment optimizations, and incorporates ASR and TTS modules for real-time spoken interaction. Evaluated on a streaming video understanding benchmark, the method achieves state-of-the-art performance, and the authors demonstrate a real-time system running at 2 FPS on two 80GB GPUs.
📝 Abstract
Video Large Language Models (VideoLLMs) have achieved strong performance on many video understanding tasks, but most existing systems remain offline and are not well-suited for live video streams that require continuous observation and timely response. Recent streaming VideoLLMs have made progress, yet current approaches often rely on decoupled trigger-response pipelines or are limited to captioning-style narration, reducing their effectiveness for open-ended question answering and long-horizon interaction. We propose AURA (Always-On Understanding and Real-Time Assistance), an end-to-end streaming visual interaction framework that enables a unified VideoLLM to continuously process video streams and support both real-time question answering and proactive responses. AURA integrates context management, data construction, training objectives, and deployment optimization for stable long-horizon streaming interaction. It achieves state-of-the-art performance on streaming benchmarks and supports a real-time demo system with ASR and TTS running at 2 FPS on two 80GB accelerators. We release the AURA model together with a real-time inference framework to facilitate future research.
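To make the contrast with decoupled trigger-response pipelines concrete, here is a toy sketch of a unified streaming loop in which a single per-frame step both maintains a bounded visual context and decides whether to answer a pending question or respond proactively. Every name and mechanism below (`StreamingAssistant`, `step`, the `salient` flag, the deque-based context) is an illustrative assumption, not AURA's actual implementation:

```python
from collections import deque


class StreamingAssistant:
    """Toy sketch of a unified streaming interaction loop.

    One call path per frame handles ingestion, context management,
    and response generation, instead of a separate trigger detector
    feeding a separate responder. All details are hypothetical.
    """

    def __init__(self, max_context_frames=8):
        # Bounded visual context: oldest frames are evicted automatically
        # (a stand-in for real streaming context management).
        self.context = deque(maxlen=max_context_frames)
        self.pending_question = None

    def ask(self, question):
        # User questions (e.g. from an ASR module) arrive asynchronously
        # while the stream keeps running.
        self.pending_question = question

    def step(self, frame):
        """Process one frame per tick (e.g. at 2 FPS).

        Returns a response string when the model should speak,
        or None to stay silent this tick.
        """
        self.context.append(frame)
        if self.pending_question is not None:
            question, self.pending_question = self.pending_question, None
            # Placeholder for unified decoding over context + question.
            return f"answer({question}, frames={len(self.context)})"
        if frame.get("salient"):
            # Proactive response decided inside the same loop,
            # not by an external trigger module.
            return f"proactive(frames={len(self.context)})"
        return None
```

A usage pass: feed frames in a loop, call `ask()` whenever a user question arrives, and forward any non-`None` return value to a TTS module. The design point is that silence, answers, and proactive utterances all come out of the same per-frame step.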