🤖 AI Summary
This work systematically evaluates the accuracy-efficiency trade-off of Sentence-BERT in multi-label code comment classification. Using a manually annotated dataset of 13,216 statements, we fine-tune Sentence-BERT and pair it with several lightweight multi-label classification heads to maximize F1-score while keeping inference practical. We quantify the impact of model size on latency (+1.4×) and computational cost (+2.1× GFLOPS), and propose a balanced deployment strategy that improves F1 by 0.0346 at this modest efficiency cost. The core contribution is showing that semantic encoder capacity and classification-head architecture should be optimized jointly in code-domain multi-label classification, yielding a reproducible, lightweight deployment recipe for practical code understanding systems.
📝 Abstract
This work evaluates Sentence-BERT for multi-label code comment classification, seeking to maximize classification performance under efficiency constraints at inference time. Using a dataset of 13,216 labeled comment sentences, Sentence-BERT models are fine-tuned and combined with different classification heads to recognize comment types. While larger models outperform smaller ones on F1, the latter are far more efficient in both runtime and GFLOPS. As a result, a balance is reached between a meaningful F1 improvement (+0.0346) and a modest efficiency degradation (+1.4× in runtime and +2.1× in GFLOPS).
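The setup described above, a lightweight multi-label head on top of fixed-size Sentence-BERT sentence embeddings, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding dimension (384, typical of small Sentence-BERT encoders), the label count, and the random weights are all assumptions for demonstration.

```python
import numpy as np

# Illustrative dimensions (assumptions, not values from the paper):
# 384-dim sentence embeddings and 5 comment-type labels.
EMB_DIM, NUM_LABELS = 384, 5

rng = np.random.default_rng(0)

# A lightweight multi-label head: a single linear layer followed by an
# element-wise sigmoid, so each label receives an independent probability
# (unlike softmax, several labels can be active at once).
W = rng.normal(scale=0.02, size=(EMB_DIM, NUM_LABELS))
b = np.zeros(NUM_LABELS)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(embeddings, threshold=0.5):
    """Map sentence embeddings to independent per-label probabilities
    and binary decisions via a fixed threshold."""
    probs = sigmoid(embeddings @ W + b)           # shape (batch, NUM_LABELS)
    return probs, (probs >= threshold).astype(int)

# Stand-in for encoder output; in practice these vectors would come from
# a fine-tuned Sentence-BERT model.
batch = rng.normal(size=(3, EMB_DIM))
probs, labels = predict(batch)
print(probs.shape, labels.shape)
```

Training such a head would typically minimize a per-label binary cross-entropy loss; keeping the head small is what preserves the low inference overhead the abstract reports for the smaller encoders.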