🤖 AI Summary
To address the challenges of large-scale multi-camera traffic video data, namely high processing latency and the difficulty of generating accurate textual descriptions in real time, this paper proposes TrafficLens, a novel algorithm for efficient video-to-text conversion at multi-camera intersections. TrafficLens exploits the overlapping coverage areas of intersection cameras: it applies a Vision-Language Model (VLM) to each camera feed in sequence with varying token limits, feeding each output as the prompt for the next camera, and uses an object-level similarity detector to skip redundant VLM invocations. The resulting text can then feed retrieval-augmented generation (RAG) systems built on Large Language Models for downstream analysis. Experiments on real-world datasets show that TrafficLens reduces video-to-text conversion time by up to 4× while maintaining information accuracy, improving both real-time performance and scalability for multi-camera traffic event analysis.
📝 Abstract
Traffic cameras are essential in urban areas, playing a crucial role in intelligent transportation systems. Multiple cameras at intersections enhance law enforcement capabilities, traffic management, and pedestrian safety. However, efficiently managing and analyzing multi-camera feeds poses challenges due to the vast amount of data. Analyzing such huge video data requires advanced analytical tools. While Large Language Models (LLMs) like ChatGPT, equipped with retrieval-augmented generation (RAG) systems, excel in text-based tasks, integrating them into traffic video analysis demands converting video data into text using a Vision-Language Model (VLM), which is time-consuming and delays the timely utilization of traffic videos for generating insights and investigating incidents. To address these challenges, we propose TrafficLens, a tailored algorithm for multi-camera traffic intersections. TrafficLens employs a sequential approach, utilizing overlapping coverage areas of cameras. It iteratively applies VLMs with varying token limits, using previous outputs as prompts for subsequent cameras, enabling rapid generation of detailed textual descriptions while reducing processing time. Additionally, TrafficLens intelligently bypasses redundant VLM invocations through an object-level similarity detector. Experimental results with real-world datasets demonstrate that TrafficLens reduces video-to-text conversion time by up to 4× while maintaining information accuracy.
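The cascaded approach described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `vlm_describe` and `object_similarity` are hypothetical stand-ins for a real VLM call and a real object-level similarity detector, and the token schedule and skip threshold are assumed values.

```python
def vlm_describe(frame, prompt, max_tokens):
    """Stand-in for a VLM invocation; a real system would call a model here."""
    return f"caption({frame}, tokens<={max_tokens}, ctx='{prompt[:20]}')"

def object_similarity(frame_a, frame_b):
    """Stand-in for object-level similarity (e.g. overlap of detected objects)."""
    return 1.0 if frame_a == frame_b else 0.3

def trafficlens_describe(camera_frames, token_schedule, skip_threshold=0.9):
    """Process overlapping cameras in sequence: each VLM call is seeded with
    the previous camera's output, and near-duplicate views skip the VLM."""
    description = ""
    prev_frame = None
    for frame, max_tokens in zip(camera_frames, token_schedule):
        # Object-level similarity gate: bypass redundant VLM invocations.
        if prev_frame is not None and object_similarity(prev_frame, frame) >= skip_threshold:
            continue  # redundant view; keep the accumulated description
        # Previous output becomes the prompt for the next camera.
        description = vlm_describe(frame, prompt=description, max_tokens=max_tokens)
        prev_frame = frame
    return description

# Four overlapping views; the duplicate "cam2" frame is skipped entirely.
print(trafficlens_describe(["cam1", "cam2", "cam2", "cam3"], [256, 128, 128, 64]))
```

Giving later cameras smaller token budgets reflects the intuition that, with overlapping coverage, most of the scene is already described and each subsequent call only needs to add incremental detail.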