🤖 AI Summary
This study addresses the theoretical gap in “perceived shared understanding” (PSU) within human–AI interaction—existing PSU frameworks, derived from human–human communication, fail to capture AI-specific cognitive constraints and behavioral logic. Through a large-scale online survey and inductive thematic analysis, we systematically identify and define eight core dimensions of PSU in human–large language model interaction, including Fluency, Aligned Operation, and Contextual Awareness. Our findings reveal AI-specific determinants such as computational constraints, non-anthropomorphic agency, and response unpredictability. This work transcends traditional interpersonal PSU paradigms by establishing the first empirically grounded, theory-driven framework for assessing and designing trustworthy, interpretable AI systems—providing both a conceptual foundation and a structured measurement basis for PSU in human–AI contexts.
📝 Abstract
Shared understanding plays a key role in the effective communication in and performance of human-human interactions. With the increasingly common integration of AI into human contexts, the future of personal and workplace interactions will likely see human-AI interaction (HAII) in which the perception of shared understanding is important. Existing literature has addressed the processes and effects of PSU in human-human interactions, but the construal remains underexplored in HAII. To better understand PSU in HAII, we conducted an online survey to collect user reflections on interactions with a large language model when it sunderstanding of a situation was thought to be similar to or different from the participant's. Through inductive thematic analysis, we identified eight dimensions comprising PSU in human-AI interactions: Fluency, aligned operation, fluidity, outcome satisfaction, contextual awareness, lack of humanlike abilities, computational limits, and suspicion.