🤖 AI Summary
To address the performance degradation of BERT fine-tuning for sentiment analysis in low-resource languages (Slovak, Maltese, Icelandic, Turkish) caused by severe scarcity of labeled data, this paper proposes a dynamic sample scheduling framework that integrates active learning with hierarchical data clustering. The authors introduce an "active learning scheduler" that leverages cluster structure to guide iterative, informative sample selection under strict annotation budget constraints, thereby improving data utilization efficiency. Experimental results demonstrate that the approach reduces human annotation effort by up to 30% across the four target languages while improving F1 scores by up to four points, and also yields more stable fine-tuning. These findings validate that structural awareness in dynamic scheduling effectively alleviates the data bottleneck in low-resource settings, offering a scalable, lightweight optimization paradigm for cross-lingual sentiment analysis.
📝 Abstract
Limited data for low-resource languages typically yields weaker language models (LMs). Since pre-training is compute-intensive, it is more pragmatic to target improvements during fine-tuning. In this work, we examine the use of Active Learning (AL) methods augmented by structured data selection strategies, which we term 'Active Learning schedulers', to boost the fine-tuning process with a limited amount of training data. We connect AL to data clustering and propose an integrated fine-tuning pipeline that systematically combines AL, clustering, and dynamic data selection schedulers to enhance model performance. Experiments in the Slovak, Maltese, Icelandic, and Turkish languages show that the use of clustering during the fine-tuning phase together with AL scheduling can simultaneously produce annotation savings of up to 30% and performance improvements of up to four F1 score points, while also providing better fine-tuning stability.
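The abstract does not spell out the scheduler's exact selection rule. A minimal sketch of one plausible cluster-aware AL strategy, assuming the unlabeled pool already has cluster labels (e.g. from hierarchical clustering of sentence embeddings) and model class probabilities; the function name and round-robin policy are illustrative assumptions, not the paper's published method:

```python
import numpy as np

def predictive_entropy(probs):
    # Uncertainty score: entropy of the model's class distribution per sample.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def cluster_aware_select(probs, cluster_ids, budget):
    """Pick `budget` samples to annotate: cycle round-robin over clusters,
    taking the most uncertain remaining sample from each cluster, so the
    batch stays both informative (high entropy) and structurally diverse."""
    scores = predictive_entropy(probs)
    # Per-cluster index queues, sorted by descending uncertainty.
    queues = {c: sorted(np.where(cluster_ids == c)[0].tolist(),
                        key=lambda i: -scores[i])
              for c in np.unique(cluster_ids)}
    selected = []
    while len(selected) < budget and any(queues.values()):
        for c in list(queues):
            if queues[c] and len(selected) < budget:
                selected.append(queues[c].pop(0))
    return selected

# Toy pool: 6 unlabeled samples, 2 clusters, binary sentiment probabilities.
probs = np.array([[0.50, 0.50], [0.90, 0.10], [0.60, 0.40],
                  [0.99, 0.01], [0.55, 0.45], [0.80, 0.20]])
clusters = np.array([0, 0, 0, 1, 1, 1])
picked = cluster_aware_select(probs, clusters, budget=4)
```

The selected indices would then be sent for human annotation, added to the labeled set, and the model fine-tuned for another round; a dynamic scheduler could additionally vary the per-round budget or shift weight between clusters as rounds progress.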