🤖 AI Summary
Existing facial clustering methods rely on Jaccard similarity to enhance embedding relationship modeling but suffer from spurious node inclusion, degrading discriminability; moreover, the optimal Top-K neighborhood size is difficult to predict adaptively, and conventional Transformers introduce noise by over-modeling irrelevant features. To address these issues, we propose a prediction-driven Top-K Jaccard similarity framework coupled with a sparse differential Transformer architecture. First, we introduce a novel *prediction-guided Top-K neighborhood selection* mechanism that dynamically refines neighborhood purity based on clustering confidence. Second, we design a *sparse differential Transformer*, which jointly employs differential feature representation and sparse attention to suppress irrelevant responses, thereby significantly improving robustness and noise resilience in similarity estimation. Extensive experiments on MS-Celeb-1M and other benchmarks demonstrate state-of-the-art performance: our method achieves superior clustering accuracy and generalization compared to existing approaches.
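To make the core idea concrete, here is a minimal NumPy sketch of a Top-K Jaccard similarity between face embeddings: each sample's cosine Top-K neighbor set is computed, and similarity between two samples is the Jaccard overlap of their neighbor sets. The fixed `k` here is a simplifying assumption for illustration; the paper's contribution is precisely to *predict* the neighborhood size per node rather than fix it.

```python
import numpy as np

def topk_jaccard(embeddings, k=3):
    """Illustrative Top-K Jaccard similarity for face embeddings.

    Each node's neighborhood is its k nearest neighbors under cosine
    similarity (self included); pairwise similarity is the Jaccard
    overlap of these neighbor sets. A fixed k is assumed here purely
    for illustration -- the paper predicts the neighborhood size.
    """
    # L2-normalize so the dot product equals cosine similarity
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T
    # Top-K neighbor indices per node
    nbrs = np.argsort(-sim, axis=1)[:, :k]
    sets = [set(row.tolist()) for row in nbrs]
    n = len(sets)
    jac = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            jac[i, j] = len(sets[i] & sets[j]) / len(sets[i] | sets[j])
    return jac
```

With two tight clusters of three embeddings each and `k=3`, within-cluster pairs share identical neighbor sets (Jaccard 1.0) while cross-cluster pairs share none (Jaccard 0.0), which is the discriminative gap the abstract argues a purer neighborhood provides.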
📝 Abstract
The method used to measure relationships between face embeddings plays a crucial role in determining the performance of face clustering. Existing methods employ the Jaccard similarity coefficient instead of the cosine distance to improve measurement accuracy. However, these methods introduce too many irrelevant nodes, producing Jaccard coefficients with limited discriminative power and adversely affecting clustering performance. To address this issue, we propose a prediction-driven Top-K Jaccard similarity coefficient that enhances the purity of neighboring nodes, thereby improving the reliability of similarity measurements. Nevertheless, accurately predicting the optimal number of neighbors (Top-K) remains challenging, leading to suboptimal clustering results. To overcome this limitation, we develop a Transformer-based prediction model that examines the relationships between the central node and its neighbors near the Top-K boundary, further enhancing the reliability of similarity estimation. However, a vanilla Transformer, when applied to predict relationships between nodes, often introduces noise because it overemphasizes irrelevant feature relationships. We therefore propose a Sparse Differential Transformer (SDT) in place of the vanilla Transformer to eliminate this noise and strengthen the model's robustness to it. Extensive experiments on multiple datasets, including MS-Celeb-1M, demonstrate that our approach achieves state-of-the-art (SOTA) performance, outperforming existing methods and providing a more robust solution for face clustering.
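The differential-attention idea underlying the SDT can be sketched as follows: two attention maps are computed and subtracted so that responses common to both (typically noise on irrelevant tokens) cancel, and the result is then sparsified. The hard top-k masking used for sparsification here, as well as the weight names and the scalar `lam`, are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_diff_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5, topk=2):
    """Illustrative sparse differential attention (single head).

    Two softmax attention maps are subtracted (differential attention,
    weighted by the hypothetical scalar `lam`) to cancel responses to
    irrelevant features; each query row is then hard-masked to its
    top-k entries (our assumed sparsification) to suppress the rest.
    Returns the output features and the sparse attention map.
    """
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))
    a = a1 - lam * a2                          # differential attention map
    # keep only the top-k scores per query row, zero out the rest
    idx = np.argsort(-a, axis=1)[:, :topk]
    mask = np.zeros_like(a)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    a = a * mask
    return a @ (x @ Wv), a
```

In this sketch each query attends to at most `topk` nodes after masking, which mirrors the abstract's goal of suppressing responses to irrelevant neighbors when predicting node relationships.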