Context-Aware Visualization for Explainable AI Recommendations in Social Media: A Vision for User-Aligned Explanations

📅 2025-08-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Social media AI recommendation systems suffer from low interpretability and user trust due to generic, context-agnostic explanations that fail to align with individual user needs. To address this, we propose a user-clustering and context-aware visual explanation framework that, for the first time, jointly adapts explanation modality (visual vs. numerical) and granularity (expert-level vs. lay-user-level) within a unified architecture—enabling personalized, context-sensitive multimodal explanation generation. Our method integrates user profiling, dynamic contextual awareness, and differentiated explanation design to support both technical and concise explanation outputs. Evaluated in a public pilot study with 30 users, the system significantly improves users’ comprehension of recommendation logic (+42%) and trust in recommendations (+38%). This work establishes a new paradigm for explainable recommendation systems that balances practicality and adaptability.

Technology Category

Application Category

📝 Abstract
Social media platforms today strive to improve user experience through AI recommendations, yet the value of such recommendations vanishes as users do not understand the reasons behind them. This issue arises because explainability in social media is general and lacks alignment with user-specific needs. In this vision paper, we outline a user-segmented and context-aware explanation layer by proposing a visual explanation system with diverse explanation methods. The proposed system is framed by the variety of user needs and contexts, showing explanations in different visualized forms, including a technically detailed version for AI experts and a simplified one for lay users. Our framework is the first to jointly adapt explanation style (visual vs. numeric) and granularity (expert vs. lay) inside a single pipeline. A public pilot with 30 X users will validate its impact on decision-making and trust.
Problem

Research questions and friction points this paper is trying to address.

Lack of user-aligned explanations in AI recommendations
General explainability not meeting user-specific needs
Need for adaptable visual and granular explanation styles
Innovation

Methods, ideas, or system contributions that make the work stand out.

User-segmented context-aware visual explanation system
Adapts explanation style and granularity jointly
Diverse visualized forms for different user needs
🔎 Similar Papers
No similar papers found.