🤖 AI Summary
This work addresses the performance degradation of models in communication scene recognition caused by data scarcity and imbalanced distributions. To this end, the authors propose FilterLoss, a novel weighted loss function that integrates sample quality assessment with a dynamic weight assignment mechanism. During transfer learning, FilterLoss steers the model toward high-value samples while effectively mitigating interference from boundary noise. By explicitly designing the loss function to accommodate class imbalance, the approach substantially enhances transfer stability under few-shot and skewed data conditions. Experimental results demonstrate that on a highly imbalanced new dataset, the model achieves 92.34% of its original accuracy, confirming the effectiveness and robustness of the proposed method.
📝 Abstract
Communication scene recognition is widely applied in practice, but solving it with deep learning faces challenges such as insufficient data and imbalanced data distributions. To address this, we designed a weighted loss function, named FilterLoss, which assigns a different loss weight to each sample point. This lets the deep learning model focus primarily on high-value samples while still giving appropriate consideration to noisy, boundary data points. Additionally, we developed a matching weight-filtering algorithm that evaluates the quality of the sample points in the input dataset and assigns each sample a weight according to its quality. With this method, when transfer learning was applied on a highly imbalanced new dataset, the transferred model recovered 92.34% of the original model's accuracy. Our experiments also showed that this loss function kept the model stable despite insufficient and imbalanced data.
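To make the idea of a per-sample weighted loss concrete, here is a minimal NumPy sketch. It is an illustration only: the `sample_weights` mapping and the `floor` parameter are hypothetical stand-ins for the paper's quality-assessment and weight-filtering algorithm, whose exact form is not given in the abstract.

```python
import numpy as np

def sample_weights(quality, floor=0.1):
    """Map per-sample quality scores in [0, 1] to loss weights.

    High-quality samples get a weight near 1; noisy, boundary samples
    are down-weighted but never ignored entirely (the `floor` keeps a
    small contribution). This mapping is an assumed stand-in for the
    paper's weight-filtering algorithm.
    """
    q = np.clip(quality, 0.0, 1.0)
    return floor + (1.0 - floor) * q

def filter_loss(probs, labels, quality):
    """FilterLoss-style weighted cross-entropy (sketch).

    probs   : (n, k) predicted class probabilities
    labels  : (n,)   integer ground-truth classes
    quality : (n,)   per-sample quality scores in [0, 1]
    Returns the weight-normalized mean cross-entropy.
    """
    n = len(labels)
    # Per-sample cross-entropy of the true class.
    ce = -np.log(probs[np.arange(n), labels] + 1e-12)
    w = sample_weights(quality)
    return float(np.sum(w * ce) / np.sum(w))
```

Down-weighting a low-quality sample reduces its pull on the total loss, so a hard, possibly mislabeled boundary point (high cross-entropy, low quality score) contributes less than a clean, high-value one.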