Unraveling Emotions with Pre-Trained Models

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Open-domain sentiment recognition faces challenges including contextual ambiguity, linguistic variation, and difficulty modeling complex, implicit, or mixed emotions. Method: This paper systematically compares fine-tuning versus prompt engineering on large language models (LLMs), proposing a structured prompt design—incorporating role specification and stepwise reasoning—and a fine-grained sentiment category aggregation strategy that groups semantically similar emotions into higher-order dimensions. Experiments are conducted across multiple scenarios using pretrained Transformer models. Contribution/Results: Structured prompting and semantic emotion grouping significantly enhance LLMs’ ability to discern implicit and compound sentiments. The optimized prompt-based approach achieves >70% accuracy without fine-tuning—approaching fine-tuned model performance—while demonstrating superior generalizability. Key findings: (1) principled emotion grouping mitigates label sparsity; (2) structured prompts effectively elicit sentiment reasoning chains in LLMs. This work provides a reproducible, lightweight adaptation methodology for LLMs in low-resource, high-ambiguity sentiment analysis tasks.
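The fine-grained emotion aggregation strategy described above can be sketched as a simple label-to-group mapping applied before evaluation. The taxonomy below is illustrative only; the paper's actual emotion labels and group names are not reproduced here.

```python
# Hypothetical sketch of semantic emotion grouping: fine-grained labels are
# mapped to broader dimensions, mitigating label sparsity. The taxonomy is
# an illustrative assumption, not the paper's actual grouping.
EMOTION_GROUPS = {
    "positive": {"joy", "love", "gratitude", "optimism"},
    "negative": {"anger", "sadness", "fear", "disgust"},
    "ambiguous": {"surprise", "confusion", "anticipation"},
}

# Invert the mapping once for O(1) lookup of each fine-grained label's group.
LABEL_TO_GROUP = {
    label: group
    for group, labels in EMOTION_GROUPS.items()
    for label in labels
}

def aggregate(label: str) -> str:
    """Map a fine-grained emotion label to its higher-order dimension."""
    return LABEL_TO_GROUP.get(label.lower(), "other")

print(aggregate("Joy"))   # -> positive
print(aggregate("fear"))  # -> negative
```

Grouping predictions and gold labels through the same mapping lets sparse fine-grained classes contribute signal at the coarser level, which is the effect the summary attributes to principled emotion grouping.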

📝 Abstract
Transformer models have significantly advanced the field of emotion recognition. However, there are still open challenges when exploring open-ended queries for Large Language Models (LLMs). Although current models offer good results, automatic emotion analysis in open texts presents significant challenges, such as contextual ambiguity, linguistic variability, and difficulty interpreting complex emotional expressions. These limitations make the direct application of generalist models difficult. Accordingly, this work compares the effectiveness of fine-tuning and prompt engineering in emotion detection in three distinct scenarios: (i) performance of fine-tuned pre-trained models and general-purpose LLMs using simple prompts; (ii) effectiveness of different emotion prompt designs with LLMs; and (iii) impact of emotion grouping techniques on these models. Experimental tests attain metrics above 70% with a fine-tuned pre-trained model for emotion recognition. Moreover, the findings highlight that LLMs require structured prompt engineering and emotion grouping to enhance their performance. These advancements improve sentiment analysis, human-computer interaction, and understanding of user behavior across various domains.
Problem

Research questions and friction points this paper is trying to address.

Addresses emotion recognition challenges in open-ended text analysis
Compares fine-tuning versus prompt engineering for emotion detection
Evaluates emotion grouping techniques for large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned pre-trained models for emotion recognition
Structured prompt engineering with Large Language Models
Emotion grouping techniques to enhance model performance
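A structured prompt of the kind the paper advocates combines role specification with stepwise reasoning instructions before asking for a label. The sketch below is a plausible rendering of that design; the exact wording, step list, and label set are assumptions, not the paper's prompt.

```python
# Illustrative structured-prompt builder in the spirit of the paper's design:
# a role specification followed by explicit reasoning steps and a constrained
# answer format. Wording and labels are hypothetical.
def build_prompt(text: str, labels: list[str]) -> str:
    return (
        "You are an expert emotion analyst.\n"          # role specification
        "Follow these steps:\n"                          # stepwise reasoning
        "1. Identify emotionally charged words or phrases.\n"
        "2. Consider context, negation, and irony.\n"
        "3. Choose exactly one label from: " + ", ".join(labels) + ".\n\n"
        f"Text: {text}\n"
        "Answer with the label only."
    )

prompt = build_prompt(
    "I can't believe they cancelled again...",
    ["joy", "anger", "sadness"],
)
print(prompt)
```

Constraining the answer to a fixed label set keeps the LLM's output machine-parseable, which matters when accuracy is computed automatically over many test texts.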
Alejandro Pajón-Sanmartín
Information Technologies Group, atlanTTic, University of Vigo, Vigo, Spain
Francisco De Arriba-Pérez
Information Technologies Group, atlanTTic, University of Vigo, Vigo, Spain
Silvia García-Méndez
Information Technologies Group, atlanTTic, University of Vigo, Vigo, Spain
Fátima Leal
Research on Economics, Management and Information Technologies, Universidade Portucalense, Porto, Portugal
Benedita Malheiro
ISEP, Polytechnic of Porto, Porto, Portugal & Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
Juan Carlos Burguillo-Rial
Full Professor at atlanTTic Research Center, University of Vigo, Vigo, Spain
Intelligent Systems · Evolutionary Game Theory · Self-organization