Reasoning About Reasoning: Towards Informed and Reflective Use of LLM Reasoning in HCI

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a pervasive "decontextualized" reading of large language models' (LLMs) reasoning capabilities in human–computer interaction (HCI): reasoning is treated as a black-box tool, while its socio-technical embeddedness and dynamic interplay with human cognition are neglected. Method: Drawing on a systematic review of 258 CHI papers (2020–2025), the study advances a theoretical framing in which LLM reasoning is a socio-technical artifact co-produced by humans, models, and context. It then designs a set of reflection prompts to support practitioners in critically deconstructing, iteratively debugging, and collaboratively orchestrating LLM reasoning within situated practice. The approach integrates NLP, machine learning, and HCI perspectives, prioritizing interdisciplinary synthesis and empirical grounding. Contribution/Results: The work exposes cognitive biases and structural limitations in how current HCI research engages with LLM reasoning, and delivers an actionable, reflection-oriented pathway toward a responsible, informed paradigm for LLM integration in HCI.

📝 Abstract
Reasoning is a distinctive human-like characteristic attributed to LLMs in HCI due to their ability to simulate various human-level tasks. However, this work argues that the reasoning behavior of LLMs in HCI is often decontextualized from the underlying mechanics and subjective decisions that condition the emergence and human interpretation of this behavior. Through a systematic survey of 258 CHI papers from 2020–2025 on LLMs, we discuss how HCI rarely perceives LLM reasoning as a product of sociotechnical orchestration and often references it as an object of application. We argue that such abstraction leads to oversimplification of reasoning methodologies from NLP/ML and results in a distortion of LLMs' empirically studied capabilities and (un)known limitations. Finally, drawing on literature from both NLP/ML and HCI, as a constructive step forward, we develop reflection prompts to support HCI practitioners in engaging with LLM reasoning in an informed and reflective way.
Problem

Research questions and friction points this paper is trying to address.

Analyzing LLM reasoning decontextualization in HCI applications
Addressing oversimplification of NLP/ML methodologies in HCI research
Developing reflection prompts for informed and reflective LLM reasoning usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed reflection prompts for informed LLM reasoning use
Analyzed 258 CHI papers on LLM reasoning methodologies
Bridged NLP/ML and HCI perspectives on reasoning behavior