Targeted Distillation for Sentiment Analysis

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited sentiment analysis capability of lightweight models. We propose a two-stage, objective-oriented knowledge distillation framework: (1) Knowledge-Driven Distillation (KnowDist), which transfers fine-grained sentiment knowledge from large language models (LLMs); and (2) In-Context Learning Distillation (ICLDist), which transfers task-specific prompt-following ability to achieve task alignment. For rigorous evaluation, we introduce SentiBench, a comprehensive sentiment analysis benchmark spanning 3 task categories and 12 datasets. With a parameter count reduced by 90% or more relative to the teacher LLMs, our model remains strongly competitive with existing small-scale LLMs across SentiBench, validating the effectiveness of decoupling sentiment-knowledge transfer from task-specific alignment.

📝 Abstract
This paper presents a compact model that achieves strong sentiment analysis capabilities through targeted distillation from advanced large language models (LLMs). Our methodology decouples the distillation target into two key components: sentiment-related knowledge and task alignment. To transfer these components, we propose a two-stage distillation framework. The first stage, knowledge-driven distillation (KnowDist), transfers sentiment-related knowledge to enhance fundamental sentiment analysis capabilities. The second stage, in-context learning distillation (ICLDist), transfers task-specific prompt-following abilities to optimize task alignment. For evaluation, we introduce SentiBench, a comprehensive sentiment analysis benchmark comprising 3 task categories across 12 datasets. Experiments on this benchmark demonstrate that our model effectively balances model size and performance, showing strong competitiveness compared to existing small-scale LLMs.
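
To make the two-stage recipe concrete, below is a minimal sketch of how such targeted distillation could look in PyTorch. Everything in it is an assumption for illustration: toy linear models stand in for the teacher LLM and the compact student, a standard temperature-scaled soft-label KL objective stands in for KnowDist, and plain supervised training on task-formatted data stands in for ICLDist; the paper's actual objectives, models, and data are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES, FEAT_DIM = 3, 128               # hypothetical 3-way sentiment task

teacher = nn.Linear(FEAT_DIM, NUM_CLASSES)   # stand-in for the frozen teacher LLM
student = nn.Linear(FEAT_DIM, NUM_CLASSES)   # compact student being distilled
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def soft_label_kd(s_logits, t_logits, T=2.0):
    # Temperature-scaled KL between teacher and student output distributions,
    # the classic soft-label distillation objective (an assumption here).
    return F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)

# Stage 1 (KnowDist-like): transfer sentiment knowledge from teacher outputs.
for _ in range(200):
    x = torch.randn(32, FEAT_DIM)            # placeholder input features
    with torch.no_grad():
        t_logits = teacher(x)
    loss = soft_label_kd(student(x), t_logits)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2 (ICLDist-like): align the student to the task format, reduced
# here to plain supervised cross-entropy on labeled examples.
for _ in range(200):
    x = torch.randn(32, FEAT_DIM)
    y = torch.randint(0, NUM_CLASSES, (32,)) # placeholder gold labels
    loss = F.cross_entropy(student(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The staging mirrors the paper's decoupling: stage 1 consumes only teacher signals, while stage 2 consumes only task-level supervision.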
Problem

Research questions and friction points this paper is trying to address.

Compact model for sentiment analysis
Targeted distillation from large language models
Balancing model size and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage distillation framework for sentiment analysis
Knowledge-driven distillation enhances sentiment capabilities
In-context learning distillation optimizes task-specific alignment (see the prompt sketch below)
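
To illustrate the task-alignment side, here is a hypothetical example of the kind of few-shot sentiment prompt an ICLDist-style stage would train the student to follow; the template, label set, and demonstrations are invented for illustration, not taken from the paper.

```python
def build_icl_prompt(demos, query):
    """Assemble a few-shot sentiment prompt from (text, label) demonstrations."""
    lines = ["Classify the sentiment of each sentence as positive, negative, or neutral."]
    for text, label in demos:
        lines.append(f"Sentence: {text}\nSentiment: {label}")
    lines.append(f"Sentence: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_icl_prompt(
    [("The plot was gripping.", "positive"),
     ("Service was painfully slow.", "negative")],
    "The soundtrack was forgettable."))
```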
Yice Zhang
Harbin Institute of Technology, Shenzhen, China; Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies

Guangyu Xie
Harbin Institute of Technology, Shenzhen, China; Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies

Jingjie Lin
Harbin Institute of Technology, Shenzhen, China; Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies

Jianzhu Bao
Nanyang Technological University
NLP · Computational Argumentation · Large Language Models · Sentiment Analysis

Qianlong Wang
Harbin Institute of Technology, Shenzhen, China; Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies

Xi Zeng
The 30th Research Institute of China Electronics Technology Group Corporation

Ruifeng Xu
Professor, Harbin Institute of Technology at Shenzhen
Natural Language Processing · Affective Computing · Argumentation Mining · LLMs · Bioinformatics