🤖 AI Summary
Existing facial landmark detection methods suffer from weak generalization and difficulty in building unified models due to inconsistent landmark definitions across datasets and the prevalent single-dataset training paradigm. To address this, we propose Proto-Former, a novel framework enabling end-to-end joint training across multiple datasets. Proto-Former introduces an Adaptive Prototype-Aware Encoder (APAE) and a Progressive Prototype-Aware Decoder (PPAD), incorporating a prototype-aware mechanism and learnable prototype experts. Furthermore, we design a Prototype-Aware (PA) loss to mitigate gradient conflicts and instability in expert assignment. Extensive experiments on benchmark datasets, including AFLW, 300W, and COFW, demonstrate that Proto-Former outperforms state-of-the-art methods, achieving both superior cross-dataset generalization and higher landmark detection accuracy.
📝 Abstract
Recent advances in deep learning have significantly improved facial landmark detection. However, existing facial landmark detection datasets often define different numbers of landmarks, and most mainstream methods can only be trained on a single dataset. This limits the model's generalization to different datasets and hinders the development of a unified model. To address this issue, we propose Proto-Former, a unified, adaptive, end-to-end facial landmark detection framework that explicitly enhances dataset-specific facial structural representations (i.e., prototypes). Proto-Former overcomes the limitations of single-dataset training by enabling joint training across multiple datasets within a unified architecture. Specifically, Proto-Former comprises two key components: an Adaptive Prototype-Aware Encoder (APAE), which performs adaptive feature extraction and learns prototype representations, and a Progressive Prototype-Aware Decoder (PPAD), which refines these prototypes to generate prompts that guide the model's attention to key facial regions. Furthermore, we introduce a novel Prototype-Aware (PA) loss, which achieves optimal path finding by constraining the selection weights of the prototype experts. This loss effectively resolves the instability of prototype-expert selection during multi-dataset training, alleviates gradient conflicts, and enables the extraction of more accurate facial structure features. Extensive experiments on widely used benchmark datasets demonstrate that Proto-Former achieves superior performance compared to existing state-of-the-art methods. The code is publicly available at: https://github.com/Husk021118/Proto-Former.
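The abstract does not give the PA loss's exact form, only that it constrains the selection weights of the prototype experts to stabilize expert assignment. As a minimal sketch of that idea (not the paper's actual loss; all names and the entropy-penalty form are assumptions for illustration), one could penalize the entropy of a gating network's softmax weights so each sample commits to a single prototype expert:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def expert_selection_penalty(gate_logits):
    """Mean entropy of the expert-selection weights.

    Low entropy means the gate assigns each sample to (mostly) one
    prototype expert -- one simple way to discourage unstable expert
    addressing during multi-dataset training. Hypothetical stand-in
    for the PA loss, whose exact formulation is not in the abstract.
    """
    w = softmax(gate_logits)                 # (batch, num_experts)
    ent = -(w * np.log(w + 1e-9)).sum(-1)    # per-sample entropy
    return ent.mean()

# A confident (near one-hot) gate is penalized less than an uncertain one.
confident = np.array([[8.0, 0.0, 0.0], [0.0, 9.0, 0.0]])
uncertain = np.array([[0.1, 0.0, 0.2], [0.0, 0.1, 0.1]])
assert expert_selection_penalty(confident) < expert_selection_penalty(uncertain)
```

In a multi-dataset setting, a term like this would be added to the landmark regression loss so that gradients from different datasets route consistently through their respective experts.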