🤖 AI Summary
To address poor minority-class recognition performance caused by severe label imbalance in code comment intent classification, this paper proposes a RoBERTa fine-tuning framework incorporating adaptive loss weighting—specifically Focal Loss and class-balanced weighting—optimized via Bayesian hyperparameter tuning. It presents the first systematic empirical evaluation of multiple loss-weighting strategies for this task, significantly enhancing model discrimination capability for sparse comment categories. On the NLBSE’25 competition dataset, the proposed method achieves an 8.9% absolute improvement in average F1<sub>c</sub> over the STACC baseline; it outperforms the baseline on 17 out of 19 subtasks, with a maximum gain of 38.2%. These results demonstrate the method’s strong generalizability and practical effectiveness in mitigating label imbalance in code comment understanding.
📝 Abstract
Developers rely on code comments to document their work, track issues, and understand the source code. As such, comments provide valuable insights into developers' understanding of their code and describe their various intentions in writing the surrounding code. Recent research leverages natural language processing and deep learning to classify comments based on developers' intentions. However, such labelled data are often imbalanced, causing learning models to perform poorly. This work investigates different weighting strategies for the loss function to mitigate the scarcity of certain classes in the dataset. In particular, various RoBERTa-based transformer models are fine-tuned by means of a hyperparameter search to identify their optimal parameter configurations. Additionally, we fine-tune the transformers with different loss-weighting strategies to address class imbalance. Our approach outperforms the STACC baseline by 8.9% on the NLBSE'25 Tool Competition dataset in terms of the average F1$_c$ score, exceeding the baseline in 17 out of 19 cases, with per-case gains ranging from -5.0 to 38.2. The source code is publicly available at https://github.com/moritzmock/NLBSE2025.
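The loss-weighting strategies referred to above (the summary names focal loss and class-balanced weighting) can be sketched in plain Python. This is a minimal illustration of the general techniques, not the paper's implementation; the `gamma` and `beta` defaults are common illustrative choices, not values reported by the authors.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def focal_loss(logits, target, gamma=2.0, class_weights=None):
    """Focal loss for a single sample: -w_t * (1 - p_t)^gamma * log(p_t).
    Down-weights easy, confident examples so training focuses on hard,
    often minority-class, samples. With gamma=0 and no class weights it
    reduces to standard cross-entropy."""
    p_t = softmax(logits)[target]
    w_t = class_weights[target] if class_weights is not None else 1.0
    return -w_t * (1.0 - p_t) ** gamma * math.log(p_t)

def class_balanced_weights(class_counts, beta=0.999):
    """Class-balanced weighting: weight each class by the inverse of its
    'effective number' of samples, (1 - beta) / (1 - beta^n_c), then
    normalize so the weights average to 1. Rare classes get larger weights."""
    raw = [(1.0 - beta) / (1.0 - beta ** n) for n in class_counts]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]
```

In a setup like the one described, per-class weights would be computed once from the training-set label counts and passed as `class_weights` during fine-tuning, so that mistakes on sparse comment categories contribute proportionally more to the loss.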