Thinking Makes LLM Agents Introverted: How Mandatory Thinking Can Backfire in User-Engaged Agents

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study demonstrates that enforced reasoning strategies, such as chain-of-thought prompting, can be counterproductive in user-interactive large language model agents, leading to degraded performance and reduced information output. Through extensive experiments across seven models and three benchmarks, complemented by quantitative response categorization and qualitative failure analysis, the work reveals for the first time that mandatory reasoning induces "introverted" behavior in agents, thereby impairing the efficacy of human-AI interaction. To address this limitation, the paper introduces "information transparency awareness" as a novel design principle for reasoning agents and shows that proactively prompting agents to disclose relevant information significantly enhances their performance across multiple models on interactive tasks.

📝 Abstract
Eliciting reasoning has emerged as a powerful technique for improving the performance of large language models (LLMs) on complex tasks by inducing explicit thinking. However, its effectiveness in realistic user-engaged agent scenarios remains unclear. In this paper, we conduct a comprehensive study of the effect of explicit thinking in user-engaged LLM agents. Our experiments span seven models, three benchmarks, and two thinking instantiations, and we evaluate them through both a quantitative response taxonomy analysis and qualitative failure-propagation case studies. Contrary to expectations, we find that mandatory thinking often backfires on agents in user-engaged settings, causing anomalous performance degradation across various LLMs. Our key finding reveals that thinking makes agents more "introverted" by shortening responses and reducing information disclosure to users, which weakens agent-user information exchange and leads to downstream task failures. Furthermore, we demonstrate that explicitly prompting for information disclosure reliably improves performance across diverse model families, suggesting that proactive transparency is a vital lever for agent optimization. Overall, our study suggests that information transparency awareness is a crucial yet underexplored perspective for the future design of reasoning agents in real-world scenarios. Our code is available at https://github.com/deeplearning-wisc/Thinking-Agent.
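
The disclosure-prompting intervention described in the abstract can be sketched as a prompt-level wrapper around an agent's system prompt. The following is a minimal illustration, assuming an OpenAI-compatible chat API; the instruction wording, the `build_messages` helper, and the model name are hypothetical placeholders, not the authors' released prompt (see the linked repository for the actual implementation).

```python
# Hypothetical sketch of a "disclosure-aware" agent prompt.
# The instruction text below is illustrative only; it is not the
# paper's exact wording. See the authors' repository for the real prompt:
# https://github.com/deeplearning-wisc/Thinking-Agent

DISCLOSURE_INSTRUCTION = (
    "When responding to the user, proactively share intermediate findings, "
    "assumptions, and partial results relevant to their request, rather than "
    "keeping them internal to your reasoning."
)

def build_messages(system_prompt: str, user_turn: str) -> list[dict]:
    """Prepend the disclosure instruction to the agent's system prompt."""
    return [
        {"role": "system",
         "content": f"{system_prompt}\n\n{DISCLOSURE_INSTRUCTION}"},
        {"role": "user", "content": user_turn},
    ]

# Usage (assumes an OpenAI-compatible client; the model name is a placeholder):
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages(agent_prompt, user_message),
# )
```

The design choice here mirrors the paper's finding: rather than disabling thinking, the agent is explicitly nudged to surface information it would otherwise keep internal, restoring the agent-user information exchange that mandatory reasoning tends to suppress.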
Problem

Research questions and friction points this paper is trying to address.

large language models
reasoning
user engagement
information disclosure
agent transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

explicit reasoning
user-engaged agents
information disclosure
thinking-induced introversion
agent transparency