🤖 AI Summary
To address the high computational cost of k-nearest neighbors (kNN) in multi-label classification, which stems from large-scale training sets, this paper extends prototype learning to the multi-label setting for the first time. The authors propose a label-aware prototype generation method that jointly optimizes label-structure consistency and instance similarity, integrating a multi-label distance metric, greedy initialization, iterative optimization guided by label coverage, and an adaptive kNN reweighting mechanism. Experiments across multiple benchmark datasets show that the method compresses the training set by over 80% while maintaining or improving macro-F1 and classification accuracy, substantially reducing inference cost without sacrificing performance. The core contribution is the first interpretable and efficient prototype learning framework designed specifically for multi-label classification, bridging scalability and fidelity in label-space modeling.
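To make the pipeline concrete, here is a minimal sketch of the inference side: distance-weighted kNN voting over a compressed prototype set, where each prototype carries a binary label vector. The function name, the inverse-distance weighting form, and the 0.5 decision threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def predict_multilabel_knn(prototypes, proto_labels, x, k=3):
    """Predict a binary label vector for x by distance-weighted voting over
    the k nearest prototypes (names and threshold are assumptions)."""
    # Euclidean distances from the query x to every prototype
    dists = np.linalg.norm(prototypes - x, axis=1)
    nn = np.argsort(dists)[:k]                       # indices of k nearest prototypes
    weights = 1.0 / (dists[nn] + 1e-8)               # inverse-distance reweighting
    # Weighted average of the neighbours' binary label vectors gives per-label scores
    scores = (weights[:, None] * proto_labels[nn]).sum(axis=0) / weights.sum()
    return (scores >= 0.5).astype(int)               # threshold each label independently

# Toy example: 4 prototypes in 2-D feature space, 3 labels each
prototypes = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
proto_labels = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]])
print(predict_multilabel_knn(prototypes, proto_labels, np.array([0.05, 0.0]), k=2))
```

Because voting runs over the prototype set rather than the full training set, an 80%+ compression translates directly into a proportional drop in per-query distance computations.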