🤖 AI Summary
This work addresses the heterogeneity of emotional expression across languages—arising from cultural differences—in cross-lingual emotion recognition. We propose a multilingual large language model (mLLM) approach based on language-specific Low-Rank Adaptation (LoRA) fine-tuning. Unlike unified fine-tuning paradigms, our method employs a distinct LoRA adapter per language, preserving the mLLM's general multilingual representation capacity while precisely capturing language-specific affective patterns. Experiments across multiple multilingual emotion datasets demonstrate significant improvements in both fine-grained emotion classification and emotion intensity regression: average accuracy increases by 4.2% over strong baselines, with particularly pronounced gains for low-resource languages. To the best of our knowledge, this is the first systematic study validating the efficacy of language-customized, parameter-efficient fine-tuning for cross-lingual emotion understanding. Our approach establishes a novel paradigm for culturally adaptive affective computing.
📝 Abstract
Detecting emotions across different languages is challenging because emotions are expressed in varied, culturally nuanced ways. The *SemEval 2025 Task 11: Bridging the Gap in Text-Based Emotion* shared task was organised to investigate emotion recognition across different languages. The goal of the task is to implement an emotion recogniser that can identify the basic emotional states that general third-party observers would attribute to an author based on their written text snippet, along with the intensity of those emotions. We report our investigation of various task-adaptation strategies for LLMs in emotion recognition. We show that the most effective method for this task is to fine-tune a pre-trained multilingual LLM with LoRA separately for each language.
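The core idea—keeping one frozen multilingual backbone and attaching a separate low-rank adapter per language—can be illustrated with the LoRA weight update itself, where each language `l` contributes its own trainable pair `(A_l, B_l)` and the effective weight is `W + (alpha/r) * B_l @ A_l`. The sketch below is a toy, dependency-free illustration; all class and variable names are hypothetical, and a real implementation would use a library such as Hugging Face PEFT on top of the pre-trained mLLM.

```python
# Toy illustration of language-specific LoRA adapters over a shared frozen
# weight. Names (LoRALinear, add_language, language codes) are hypothetical.

def matmul(X, Y):
    """Plain-Python matrix product of nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    """Element-wise sum of two equally shaped matrices."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(X, s):
    """Multiply every entry of X by scalar s."""
    return [[s * a for a in row] for row in X]

class LoRALinear:
    """Frozen base weight W plus a per-language low-rank update (alpha/r)*B@A."""

    def __init__(self, W, rank, alpha):
        self.W = W              # frozen pre-trained weight, shape d_out x d_in
        self.rank = rank        # LoRA rank r
        self.alpha = alpha      # LoRA scaling factor
        self.adapters = {}      # language code -> (A, B), the only trainable parts

    def add_language(self, lang, A, B):
        # A: rank x d_in, B: d_out x rank — one adapter pair per language
        self.adapters[lang] = (A, B)

    def effective_weight(self, lang):
        """Weight actually applied when serving language `lang`."""
        A, B = self.adapters[lang]
        return matadd(self.W, scale(matmul(B, A), self.alpha / self.rank))

# Toy usage: a 2x2 identity base weight with rank-1 adapters for two languages.
W = [[1.0, 0.0], [0.0, 1.0]]
layer = LoRALinear(W, rank=1, alpha=1.0)
layer.add_language("eng", A=[[1.0, 0.0]], B=[[0.0], [1.0]])
layer.add_language("deu", A=[[0.0, 1.0]], B=[[1.0], [0.0]])

print(layer.effective_weight("eng"))  # → [[1.0, 0.0], [1.0, 1.0]]
print(layer.effective_weight("deu"))  # → [[1.0, 1.0], [0.0, 1.0]]
```

Because only the small `(A, B)` pairs are trained, each language's affective patterns are captured in a few extra parameters while the shared multilingual backbone stays untouched—the property the per-language fine-tuning strategy relies on.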