🤖 AI Summary
To address viewpoint sensitivity and insufficient fine-grained representation in gait recognition, this paper proposes DepthGait, a novel multimodal framework that fuses temporally aligned depth maps (estimated from RGB sequences) with binary silhouette sequences. Methodologically, it employs a lightweight depth estimation network to generate depth sequences, leverages a multi-scale CNN for spatiotemporal feature extraction, and introduces a cross-level, cross-modal fusion module that explicitly aligns and complements geometric and dynamic cues from depth and silhouettes, thereby mitigating modality discrepancy. Evaluated on the CASIA-B and OU-MVLP benchmarks, DepthGait achieves state-of-the-art rank-1 accuracy, notably improving performance by up to 3.2% under large viewpoint variations (>45°). These results demonstrate the effectiveness of jointly enhancing fine-grained gait modeling and viewpoint robustness through complementary multimodal representation.
📄 Abstract
Robust gait recognition requires highly discriminative representations, which are closely tied to the input modalities. While binary silhouettes and skeletons have dominated the recent literature, these 2D representations fall short of capturing sufficient cues to handle viewpoint variations and to preserve the finer, meaningful details of gait. In this paper, we introduce a novel framework, termed DepthGait, that incorporates RGB-derived depth maps and silhouettes for enhanced gait recognition. Specifically, apart from the 2D silhouette representation of the human body, the proposed pipeline explicitly estimates depth maps from a given RGB image sequence and uses them as a new modality to capture discriminative features inherent in human locomotion. In addition, a novel multi-scale and cross-level fusion scheme has been developed to bridge the modality gap between depth maps and silhouettes. Extensive experiments on standard benchmarks demonstrate that the proposed DepthGait achieves state-of-the-art performance compared to peer methods and attains impressive mean rank-1 accuracy on these challenging datasets.
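The paper's code is not reproduced here; the PyTorch sketch below only illustrates the general idea described above, i.e. a two-stream backbone over silhouette and estimated depth frames with a fusion step at every feature level and aggregation across levels. All module names, channel widths, and the concatenation-based fusion are placeholder assumptions for illustration, not DepthGait's actual architecture.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Simple conv -> BN -> ReLU -> pool block shared by both streams."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.net(x)


class CrossModalFusion(nn.Module):
    """Placeholder cross-modal fusion: concatenate silhouette and depth
    features at one level and project back to a shared channel width."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_sil, f_dep):
        return self.proj(torch.cat([f_sil, f_dep], dim=1))


class DualStreamGaitSketch(nn.Module):
    """Two streams (silhouette, depth), fused at each level and aggregated
    across levels into a single embedding. Hypothetical sizes throughout."""
    def __init__(self, widths=(32, 64, 128), embed_dim=256):
        super().__init__()
        self.sil_blocks = nn.ModuleList()
        self.dep_blocks = nn.ModuleList()
        self.fusions = nn.ModuleList()
        in_ch = 1  # both modalities are single-channel maps
        for w in widths:
            self.sil_blocks.append(ConvBlock(in_ch, w))
            self.dep_blocks.append(ConvBlock(in_ch, w))
            self.fusions.append(CrossModalFusion(w))
            in_ch = w
        self.head = nn.Linear(sum(widths), embed_dim)

    def forward(self, sil, dep):
        # sil, dep: (batch, 1, H, W) per-frame inputs; temporal pooling over
        # a sequence would be handled outside this sketch.
        fused_levels = []
        f_s, f_d = sil, dep
        for sb, db, fuse in zip(self.sil_blocks, self.dep_blocks, self.fusions):
            f_s, f_d = sb(f_s), db(f_d)
            fused = fuse(f_s, f_d)                        # cross-modal fusion at this level
            fused_levels.append(fused.mean(dim=(2, 3)))   # global average pool per level
        return self.head(torch.cat(fused_levels, dim=1))  # cross-level aggregation


if __name__ == "__main__":
    model = DualStreamGaitSketch()
    sil = torch.randn(2, 1, 64, 44)   # silhouette frames
    dep = torch.randn(2, 1, 64, 44)   # depth frames estimated from RGB
    print(model(sil, dep).shape)      # torch.Size([2, 256])
```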