Talk, Snap, Complain: Validation-Aware Multimodal Expert Framework for Fine-Grained Customer Grievances

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing complaint analysis methods are largely confined to unimodal, short-text inputs (e.g., tweets), rendering them inadequate for complex, multimodal, multi-turn customer support dialogues that contain both textual complaints and visual evidence (e.g., screenshots, product images). This work proposes VALOR, the first framework to systematically model such multimodal, multi-turn support conversations for fine-grained joint classification of complaint aspects and severity levels. Its core innovations are: (1) a validation-aware multi-expert reasoning mechanism, (2) a semantic-alignment-driven cross-modal fusion strategy, and (3) chain-of-thought (CoT) enhanced prompting for nuanced decision-making. Evaluated on a newly constructed fine-grained multimodal complaint dataset, VALOR significantly outperforms state-of-the-art baselines, particularly under imbalanced text-image distributions, demonstrating strong robustness. This research advances United Nations Sustainable Development Goals 9 (Industry, Innovation and Infrastructure) and 12 (Responsible Consumption and Production).

📝 Abstract
Existing approaches to complaint analysis largely rely on unimodal, short-form content such as tweets or product reviews. This work advances the field by leveraging multimodal, multi-turn customer support dialogues, where users often share both textual complaints and visual evidence (e.g., screenshots, product photos) to enable fine-grained classification of complaint aspects and severity. We introduce VALOR, a Validation-Aware Learner with Expert Routing, tailored for this multimodal setting. It employs a multi-expert reasoning setup using large-scale generative models with Chain-of-Thought (CoT) prompting for nuanced decision-making. To ensure coherence between modalities, a semantic alignment score is computed and integrated into the final classification through a meta-fusion strategy. In alignment with the United Nations Sustainable Development Goals (UN SDGs), the proposed framework supports SDG 9 (Industry, Innovation and Infrastructure) by advancing AI-driven tools for robust, scalable, and context-aware service infrastructure. Further, by enabling structured analysis of complaint narratives and visual context, it contributes to SDG 12 (Responsible Consumption and Production) by promoting more responsive product design and improved accountability in consumer services. We evaluate VALOR on a curated multimodal complaint dataset annotated with fine-grained aspect and severity labels, showing that it consistently outperforms baseline models, especially in complex complaint scenarios where information is distributed across text and images. This study underscores the value of multimodal interaction and expert validation in practical complaint understanding systems. Resources related to data and codes are available here: https://github.com/sarmistha-D/VALOR
Problem

Research questions and friction points this paper is trying to address.

Analyzing multimodal customer complaints with text and visual evidence
Classifying fine-grained complaint aspects and severity levels
Ensuring semantic coherence between different information modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal expert framework using generative models
Validation-aware routing with semantic alignment scoring
Chain-of-Thought prompting for fine-grained classification
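
The validation-aware routing idea above can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's implementation: the semantic alignment score is approximated here by plain cosine similarity between modality embeddings, the meta-fusion rule is a simple alignment-weighted average of expert scores, and the function names are hypothetical.

```python
import math

def alignment_score(text_emb, image_emb):
    # Cosine similarity between text and image embeddings; a hypothetical
    # stand-in for VALOR's semantic alignment score (exact formula not given here).
    dot = sum(t * v for t, v in zip(text_emb, image_emb))
    norm = (math.sqrt(sum(t * t for t in text_emb))
            * math.sqrt(sum(v * v for v in image_emb)))
    return dot / norm

def meta_fusion(text_logits, image_logits, align):
    # Weight the image expert by the alignment score mapped to [0, 1], then
    # renormalize: when the modalities disagree (low alignment), the text
    # expert dominates the fused prediction.
    w = (align + 1.0) / 2.0
    return [(t + w * v) / (1.0 + w) for t, v in zip(text_logits, image_logits)]

# Example: perfectly aligned modalities are fused with equal weight.
score = alignment_score([1.0, 0.0], [1.0, 0.0])        # -> 1.0
fused = meta_fusion([2.0, 0.0], [0.0, 2.0], score)     # -> [1.0, 1.0]
```

In a full system, each expert's logits would come from a generative model prompted with CoT reasoning; the weighting scheme shown is only one plausible way to make fusion "validation-aware."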
Rishu Kumar Singh
Indian Institute of Technology Patna, India
Navneet Shreya
National Institute of Technology Patna, India
Sarmistha Das
Indian Institute of Technology Patna, India
A. Singh
Fondazione Bruno Kessler, Italy
Sriparna Saha
Indian Institute of Technology Patna, India