🤖 AI Summary
This work addresses two key challenges in customer feedback analysis: (1) the difficulty of attributing sentiment to specific textual spans, and (2) the semantic mismatch between user queries and feedback documents. To tackle these, the authors cast the problem as Query-Focused Summarization (QFS) and propose a multi-bias framework: at a domain-agnostic, generic level, it mitigates the language gap between queries and source documents; for the sentiment-explanation setting, it adds specialized sentiment-based biases and query expansion to improve semantic alignment. Experiments on a real-world proprietary sentiment-aware QFS dataset show that the method outperforms existing baselines, offering an interpretable and deployable approach to constructive customer feedback analysis.
📝 Abstract
Constructive analysis of client feedback often requires determining the cause of a client's sentiment from a large volume of text documents. To assist such efforts and improve their productivity, we cast the problem as Query-Focused Summarization (QFS). Models for this task are often impeded by the language gap between the query and the source documents. We propose and validate a multi-bias framework to help bridge this gap at a domain-agnostic, generic level; we then formulate specialized approaches to the problem of sentiment explanation through sentiment-based biases and query expansion. Our experimental results outperform baseline models on a real-world proprietary sentiment-aware QFS dataset.