🤖 AI Summary
This study addresses the scarcity of fine-grained emotion classification benchmarks for Vietnamese by introducing ViGoEmotions, a corpus of 20,664 social media comments annotated with a taxonomy of 27 fine-grained emotion labels. The authors systematically evaluate eight pre-trained Transformer models, including ViSoBERT, PhoBERT, and CafeBERT, under three emoji preprocessing strategies: preserving original emojis (with rule-based normalization), converting them to textual descriptions, and applying ViSoLex model-based lexical normalization. ViSoBERT achieves the best performance, with a Macro F1 score of 61.50% and a Weighted F1 score of 63.26%, and the choice of emoji preprocessing significantly affects model effectiveness. These findings underscore the importance of high-quality annotation and appropriate preprocessing in Vietnamese emotion classification.
📝 Abstract
Emotion classification plays a significant role in emotion prediction and harmful content detection. Recent advances in NLP, particularly through large language models (LLMs), have greatly improved results in this field. This study introduces ViGoEmotions, a Vietnamese emotion corpus of 20,664 social media comments, each labeled according to a taxonomy of 27 fine-grained emotions. To evaluate the quality of the dataset and its impact on emotion classification, eight pre-trained Transformer-based models were evaluated under three preprocessing strategies: preserving original emojis with rule-based normalization, converting emojis into textual descriptions, and applying ViSoLex, a model-based lexical normalization system. Results show that converting emojis into text often improves several BERT-based baselines, while preserving emojis yields the best results for ViSoBERT and CafeBERT; removing emojis generally lowers performance. ViSoBERT achieved the highest Macro F1-score (61.50%) and Weighted F1-score (63.26%), with CafeBERT and PhoBERT also performing strongly. These findings indicate that while the proposed corpus supports diverse architectures effectively, preprocessing strategy and annotation quality remain key factors in downstream performance.
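To make the "converting emojis into textual descriptions" strategy concrete, here is a minimal Python sketch. The mapping table and function name are illustrative assumptions; the paper does not specify its exact emoji inventory, description format, or implementation (which may instead rely on an emoji lexicon library).

```python
# Hypothetical illustration of the emoji-to-text preprocessing strategy.
# The mapping below is an assumption for demonstration, not the paper's
# actual emoji inventory or description format.
EMOJI_TO_TEXT = {
    "😂": " face_with_tears_of_joy ",
    "❤️": " red_heart ",
    "😡": " angry_face ",
}

def emojis_to_text(comment: str) -> str:
    """Replace each known emoji with a textual description token."""
    for emo, desc in EMOJI_TO_TEXT.items():
        comment = comment.replace(emo, desc)
    # Collapse any doubled whitespace introduced by the replacements.
    return " ".join(comment.split())

print(emojis_to_text("Hay quá 😂❤️"))
```

The idea is that a description token like `face_with_tears_of_joy` is visible to a subword tokenizer as ordinary text, whereas a raw emoji may fall outside a model's vocabulary; the other two strategies keep (and normalize) or remove the raw emoji instead.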