📝 Abstract
Self-disclosure is important for our well-being, yet is often difficult. This difficulty can arise from how we expect others to react to what we disclose. In this workshop paper, we briefly discuss self-disclosure to conversational user interfaces (CUIs) in relation to various social cues. We then discuss how expressions of uncertainty, or representations of a CUI's reasoning, could encourage self-disclosure by making a CUI's intended "theory of mind" more transparent to users.