AI Summary
To address the challenges of massive surveillance video data, high storage costs, and inefficient semantic retrieval, this paper proposes an end-to-end generative semantic summarization method based on vision-language models (VLMs). We pioneer the application of VLMs to spatiotemporally consistent, event-level textual summarization of surveillance videos, integrating temporal segmentation encoding, cross-modal alignment, and prompt-driven summarization. This approach overcomes key limitations of conventional action recognition and generic video summarization techniques. Evaluated on a real-world CCTV dataset, our method achieves 80% accuracy in event temporal localization and 70% spatial consistency. The generated summaries achieve a >99.9% compression ratio relative to raw video, drastically reducing long-term storage overhead. Moreover, the system enables sub-second event retrieval and supports natural language-based interactive querying, enhancing operational efficiency and semantic accessibility in large-scale video surveillance systems.
Abstract
The rapid growth of video content has produced enormous data volumes, creating significant challenges for efficient analysis and resource management and making robust video analysis tools essential. This paper presents an innovative proof of concept that uses Generative Artificial Intelligence (GenAI), in the form of Vision Language Models, to enhance downstream video analysis. Our tool generates customized textual summaries based on user-defined queries, providing focused insights into extensive video datasets. Unlike traditional methods that offer generic summaries or limited action recognition, our approach uses Vision Language Models to extract the relevant information, improving analysis precision and efficiency. The proposed method produces textual summaries from extensive CCTV footage that can be stored indefinitely in a fraction of the space required by the raw video, allowing users to quickly locate and verify significant events without exhaustive manual review. Qualitative evaluation yields 80% temporal and 70% spatial accuracy and consistency for the pipeline.