🤖 AI Summary
This study investigates how displaying “visible thinking” in conversational AI, i.e., revealing its reasoning process before responding, affects users’ perceptions of the AI’s empathy, warmth, competence, and engagement. Using a 3 (thinking type: none, emotional-support-oriented, expertise-support-oriented) × 2 (problem context: habitual vs. emotional) mixed-design experiment with a controlled dialogue platform, post-interaction questionnaires, and interaction logs, the research provides the first systematic evaluation of the multidimensional psychological effects of visible thinking in sensitive help-seeking scenarios. Results indicate that emotional-support-oriented thinking significantly enhances perceived empathy and warmth, while expertise-support-oriented thinking boosts perceived competence; these effects are moderated by problem type. The findings reveal a tension between transparency and anthropomorphism, offering empirical guidance for designing AI intention-expressing behaviors.
📝 Abstract
People increasingly turn to conversational agents such as ChatGPT to seek guidance for their personal problems. As these systems grow in capability, many now display elements of "thinking": short reflective statements that reveal a model's intentions or values before responding. While initially introduced to promote transparency, such visible thinking can also anthropomorphise the agent and shape user expectations. Yet little is known about how these displays affect user perceptions in help-seeking contexts. We conducted a 3 × 2 mixed-design experiment examining the impact of 'Thinking Content' (None, Emotionally-Supportive, Expertise-Supportive) and 'Conversation Context' (Habit-related vs. Feelings-related problems) on users' perceptions of empathy, warmth, competence, and engagement. Participants interacted with a chatbot that either showed no visible thinking or presented value-oriented reflections prior to its response. Our findings contribute to understanding how thinking transparency influences user experience in supportive dialogues, and offer implications for designing conversational agents that communicate intentions in sensitive, help-seeking scenarios.