🤖 AI Summary
This work addresses the limited sentiment analysis capability of lightweight models. We propose a two-stage, objective-oriented knowledge distillation framework: (1) Knowledge-Driven Distillation (KnowDist), which decouples fine-grained sentiment knowledge from large language models (LLMs) and transfers it to the compact student; and (2) In-Context Learning Distillation (ICLDist), which achieves task-specific prompt alignment and contextual modeling. To rigorously evaluate progress, we introduce SentiBench, a comprehensive sentiment analysis benchmark spanning 3 task categories across 12 diverse datasets. Experimental results demonstrate that, with parameter counts reduced by at least 90% relative to the teacher LLMs, our model remains strongly competitive with existing small-scale LLMs across SentiBench tasks. This validates the effectiveness and generalizability of decoupling sentiment knowledge transfer from task-specific alignment in efficient sentiment modeling.
📝 Abstract
This paper presents a compact model that achieves strong sentiment analysis capabilities through targeted distillation from advanced large language models (LLMs). Our methodology decouples the distillation target into two key components: sentiment-related knowledge and task alignment. To transfer these components, we propose a two-stage distillation framework. The first stage, knowledge-driven distillation (KnowDist), transfers sentiment-related knowledge to enhance fundamental sentiment analysis capabilities. The second stage, in-context learning distillation (ICLDist), transfers task-specific prompt-following abilities to optimize task alignment. For evaluation, we introduce SentiBench, a comprehensive sentiment analysis benchmark comprising 3 task categories across 12 datasets. Experiments on this benchmark demonstrate that our model effectively balances model size and performance, showing strong competitiveness compared to existing small-scale LLMs.