🤖 AI Summary
This work addresses the problem of detecting dynamically evolving human groups in video, overcoming the limitation of existing methods that assume static group configurations. The proposed method introduces a temporal modeling framework that jointly leverages Vision-Language Model (VLM)-enhanced local appearance features and global scene context. Specifically, CLIP is employed to extract semantically robust frame-level groupness features; a cross-frame groupness graph is then constructed to explicitly model member identity consistency and temporal structural evolution; finally, graph optimization enforces temporally consistent group partitioning across the whole video. Evaluated on public benchmarks, the approach outperforms state-of-the-art methods and enables detection of dynamic group behaviors, including splitting, merging, and reconfiguration.
📝 Abstract
This paper proposes dynamic human group detection in videos. Detecting complex groups requires not only the local appearance features of in-group members but also the global context of the scene. Our method extracts these local and global appearance features in each frame using a Vision-Language Model (VLM) augmented for group detection. For further improvement, the group structure should be consistent over time. While previous methods gain stability by assuming that groups do not change within a video, our method detects dynamically changing groups through global optimization over a graph built from all frames' groupness probabilities, estimated from our groupness-augmented CLIP features. Our experimental results demonstrate that our method outperforms state-of-the-art group detection methods on public datasets. Code: https://github.com/irajisamurai/VLM-GroupDetection.git
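The pipeline described above can be illustrated with a minimal toy sketch: per-frame pairwise groupness probabilities (which the paper estimates from groupness-augmented CLIP features) are smoothed over a temporal window as a simplified stand-in for the paper's global graph optimization, then thresholded into connected components to yield the group partition of each frame. All function names, the dictionary-of-pairs input format, and the windowed-averaging step are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def smooth_groupness(frame_probs, window=1):
    """Average each pair's groupness probability over a temporal window.

    frame_probs: list over frames of {(i, j): prob} dicts. The probabilities
    stand in for per-frame scores derived from CLIP features (hypothetical
    input format); the averaging is a toy proxy for global graph optimization.
    """
    T = len(frame_probs)
    smoothed = []
    for t in range(T):
        lo, hi = max(0, t - window), min(T - 1, t + window)
        acc = defaultdict(list)
        for s in range(lo, hi + 1):
            for pair, p in frame_probs[s].items():
                acc[pair].append(p)
        smoothed.append({pair: sum(ps) / len(ps) for pair, ps in acc.items()})
    return smoothed

def detect_groups(pair_probs, members, thresh=0.5):
    """Threshold pairwise groupness and return groups as connected components."""
    parent = {m: m for m in members}
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, j), p in pair_probs.items():
        if p >= thresh:
            parent[find(i)] = find(j)
    comps = defaultdict(set)
    for m in members:
        comps[find(m)].add(m)
    return sorted(sorted(c) for c in comps.values())

# A pair that flickers off in one frame is recovered by temporal smoothing:
frames = [
    {("a", "b"): 0.9, ("b", "c"): 0.1},
    {("a", "b"): 0.2, ("b", "c"): 0.1},  # spurious per-frame drop
    {("a", "b"): 0.9, ("b", "c"): 0.1},
]
smoothed = smooth_groupness(frames, window=1)
print(detect_groups(frames[1], ["a", "b", "c"]))    # per-frame only: all singletons
print(detect_groups(smoothed[1], ["a", "b", "c"]))  # smoothed: a and b stay grouped
```

This captures the abstract's core argument in miniature: frame-independent decisions are noisy, while coupling frames through a temporal structure yields consistent groups without freezing them for the whole video.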