🤖 AI Summary
This study addresses the fine-grained, two-level classification of hazards and products in food recall reports, with particular emphasis on low accuracy on minority classes. We propose a class-specific, word-level data augmentation strategy and systematically evaluate the impact of synonym replacement, random word swapping, and context-aware word insertion on both Transformer-based models (e.g., BERT) and traditional machine learning classifiers. To our knowledge, this is the first work to empirically demonstrate, within an interpretable food hazard classification setting, that context-aware word insertion significantly improves accuracy on minority hazard classes (+6%, *p* < 0.05), with gains that are class-specific rather than universal. The results indicate that augmentation strategies must be tailored per class rather than applied uniformly, and that BERT achieves statistically significant performance gains in fine-grained classification. This work establishes an interpretable, reproducible data augmentation paradigm for few-shot classification in the food safety domain.
📝 Abstract
This paper presents the system we developed for SemEval-2025 Task 9: The Food Hazard Detection Challenge. The shared task evaluates explainable classification systems that label hazards and products at two levels of granularity from food recall incident reports. In this work, we propose text augmentation as a way to improve poor performance on minority classes and compare the effect of each technique, per category, on several transformer and traditional machine learning models. We explore three word-level data augmentation techniques: synonym replacement, random word swapping, and contextual word insertion. The results show that transformer models tend to achieve better overall performance, but none of the three augmentation techniques consistently improved overall performance for classifying hazards and products. For the BERT model, comparing each augmented variant against the baseline showed a statistically significant improvement (p < 0.05) on the fine-grained categories, and contextual word insertion improved prediction accuracy on the minority hazard classes by 6% over the baseline. This suggests that targeted augmentation of minority classes can improve the performance of transformer models.
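The three word-level techniques named above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the `SYNONYMS` table is a toy stand-in (real systems typically draw synonyms from WordNet or embedding neighbors), and `fill_mask` is a hypothetical callable standing in for a masked language model such as BERT, which would predict the inserted word from context.

```python
import random

# Toy synonym table for illustration only; a real system would use
# WordNet (e.g. via nltk) or nearest neighbors in an embedding space.
SYNONYMS = {"contaminated": ["tainted", "polluted"], "recall": ["withdrawal"]}

def synonym_replacement(tokens, n=1, rng=random):
    """Replace up to n tokens that have known synonyms."""
    out = list(tokens)
    candidates = [i for i, t in enumerate(out) if t in SYNONYMS]
    for i in rng.sample(candidates, min(n, len(candidates))):
        out[i] = rng.choice(SYNONYMS[out[i]])
    return out

def random_swap(tokens, n=1, rng=random):
    """Swap two randomly chosen token positions, n times (needs >= 2 tokens)."""
    out = list(tokens)
    for _ in range(n):
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def contextual_insertion(tokens, fill_mask, rng=random):
    """Insert one word at a random position, chosen by a masked-LM callable.
    `fill_mask` maps a text containing "[MASK]" to a predicted word; in
    practice this would wrap a model like BERT."""
    out = list(tokens)
    pos = rng.randrange(len(out) + 1)
    _ = " ".join(out[:pos] + ["[MASK]"] + out[pos:])  # text the LM would see
    out.insert(pos, fill_mask(_))
    return out
```

All three operations preserve the label of the original example, which is what makes them usable for oversampling minority classes before training.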