Trapped by Expectations: Functional Fixedness in LLM-Enabled Chat Search

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies functional fixedness in LLM-based conversational search—arising from users’ prior experience with AI systems—that impedes effective performance on complex, exploratory tasks (e.g., public safety, health, sustainability, AI ethics). Method: A 450-participant crowdsourced experiment integrates conversational log analysis, linguistic feature quantification (e.g., anaphoric expressions, hedging terms, prompt revision frequency), and cross-system behavioral modeling. Contribution/Results: We provide the first systematic evidence that prior usage of ChatGPT, search engines, or virtual assistants significantly shapes prompting behavior—ChatGPT users adopt iterative refinement strategies, whereas search/virtual assistant users exhibit rigid, command-oriented patterns. We propose a novel user-intent taxonomy and demonstrate that unmet expectations trigger adaptive behavioral shifts, increasing prompt diversity by 37%. These findings establish a cognitive-mechanistic foundation and empirically grounded framework for designing more adaptive, user-aware LLM interfaces.

📝 Abstract
Functional fixedness, a cognitive bias that restricts users' interactions with a new system or tool to expected or familiar ways, limits the full potential of Large Language Model (LLM)-enabled chat search, especially in complex and exploratory tasks. To investigate its impact, we conducted a crowdsourcing study with 450 participants, each completing one of six decision-making tasks spanning public safety, diet and health management, sustainability, and AI ethics. Participants engaged in a multi-prompt conversation with ChatGPT to address the task, allowing us to compare pre-chat intent-based expectations with observed interactions. We found that: 1) Several aspects of pre-chat expectations are closely associated with users' prior experiences with ChatGPT, search engines, and virtual assistants; 2) Prior system experience shapes language use and prompting behavior. Frequent ChatGPT users reduced deictic terms and hedge words and frequently adjusted prompts. Users with rich search experience maintained structured, less-conversational queries with minimal modifications. Users of virtual assistants favored directive, command-like prompts, reinforcing functional fixedness; 3) When the system failed to meet expectations, participants generated more detailed prompts with increased linguistic diversity, reflecting adaptive shifts. These findings suggest that while preconceived expectations constrain early interactions, unmet expectations can motivate behavioral adaptation. With appropriate system support, this may promote broader exploration of LLM capabilities. This work also introduces a typology for user intents in chat search and highlights the importance of mitigating functional fixedness to support more creative and analytical use of LLMs.
Problem

Research questions and friction points this paper is trying to address.

How functional fixedness manifests in LLM-enabled chat search interactions
How prior experience with ChatGPT, search engines, and virtual assistants shapes users' prompting behaviors
How users adapt their prompting when the system fails to meet their expectations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Crowdsourcing study with 450 participants across six decision-making tasks
Analysis of multi-prompt conversations with ChatGPT, comparing pre-chat expectations against observed behavior
A typology of user intents in chat search
Jiqun Liu
The University of Oklahoma, USA
Jamshed Karimnazarov
The University of Oklahoma, USA
Ryen W. White
Vice President, Microsoft
Information Retrieval · Human-Computer Interaction · Artificial Intelligence · Psychology · Health