📝 Abstract
Social media platforms strive to improve user experience through AI-driven recommendations, yet the value of those recommendations diminishes when users do not understand the reasoning behind them. This issue arises because explanations on social media are generic and not aligned with user-specific needs. In this vision paper, we outline a user-segmented and context-aware explanation layer, proposing a visual explanation system that draws on diverse explanation methods. The proposed system is shaped by the variety of user needs and contexts and presents explanations in different visual forms, including a technically detailed version for AI experts and a simplified one for lay users. Our framework is the first to jointly adapt explanation style (visual vs. numeric) and granularity (expert vs. lay) within a single pipeline. A public pilot with 30 X users will validate its impact on decision-making and trust.
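To make the joint adaptation of style and granularity concrete, the minimal Python sketch below illustrates one way such a routing step could work, assuming a simple rule-based policy over a user cluster and session context. The names (`UserProfile`, `Context`, `choose_explanation`) and the rules themselves are illustrative assumptions, not the framework's actual implementation.

```python
# Illustrative sketch only: the paper does not prescribe an implementation.
# All names and rules below are hypothetical, chosen to show how explanation
# style and granularity could be selected jointly from user and context signals.
from dataclasses import dataclass

@dataclass
class UserProfile:
    cluster: str          # e.g. "ai_expert" or "lay_user", from user clustering
    prefers_visual: bool  # stated or inferred preference for visual explanations

@dataclass
class Context:
    on_mobile: bool       # small screens favor compact visual explanations
    time_pressed: bool    # hurried sessions favor simplified granularity

def choose_explanation(user: UserProfile, ctx: Context) -> dict:
    """Jointly pick explanation style (visual vs. numeric) and granularity (expert vs. lay)."""
    style = "visual" if (user.prefers_visual or ctx.on_mobile) else "numeric"
    granularity = "expert" if (user.cluster == "ai_expert" and not ctx.time_pressed) else "lay"
    return {"style": style, "granularity": granularity}

if __name__ == "__main__":
    print(choose_explanation(UserProfile("ai_expert", False), Context(False, False)))
    # -> {'style': 'numeric', 'granularity': 'expert'}
    print(choose_explanation(UserProfile("lay_user", True), Context(True, True)))
    # -> {'style': 'visual', 'granularity': 'lay'}
```

In a deployed system, the hand-written rules above could be replaced by a learned policy, but the interface stays the same: user profile and context in, an explanation configuration out.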