User Misconceptions of LLM-Based Conversational Programming Assistants

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates programmers' functional misconceptions about LLM-powered conversational programming assistants — in particular, erroneous expectations about web access, code execution, and non-textual output, along with deeper conceptual misunderstandings that surface during debugging, verification, and optimization. The authors employ a two-phase mixed-method approach: first generating misconception hypotheses via structured brainstorming, then conducting a qualitative empirical analysis of publicly available, naturally occurring Python programming dialogues. The analysis identifies and validates several high-frequency misconception patterns and demonstrates their adverse effects, including overreliance, inefficient interaction, and failures in quality assurance. The primary contributions are (1) an empirically grounded taxonomy of user misconceptions specific to LLM-based programming assistants, and (2) a set of capability-explicitness design principles for assistant tools. These findings provide both a theoretical foundation and practical guidelines for improving the reliability and effectiveness of human-AI collaborative programming.

📝 Abstract
Programming assistants powered by large language models (LLMs) have become widely available, with conversational assistants like ChatGPT proving particularly accessible to less experienced programmers. However, the varied capabilities of these tools across model versions and the mixed availability of extensions that enable web search, code execution, or retrieval-augmented generation create opportunities for user misconceptions about what systems can and cannot do. Such misconceptions may lead to over-reliance, unproductive practices, or insufficient quality control in LLM-assisted programming. Here, we aim to characterize misconceptions that users of conversational LLM-based assistants may have in programming contexts. Using a two-phase approach, we first brainstorm and catalog user misconceptions that may occur, and then conduct a qualitative analysis to examine whether these conceptual issues surface in naturalistic Python-programming conversations with an LLM-based chatbot drawn from an openly available dataset. Indeed, we see evidence that some users have misplaced expectations about the availability of LLM-based chatbot features like web access, code execution, or non-text output generation. We also see potential evidence for deeper conceptual issues around the scope of information required to debug, validate, and optimize programs. Our findings reinforce the need for designing LLM-based tools that more clearly communicate their programming capabilities to users.
Problem

Research questions and friction points this paper is trying to address.

Characterize user misconceptions of LLM-based programming assistants
Examine misconceptions in naturalistic programming conversations with chatbots
Identify misplaced user expectations about chatbot features and capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cataloging user misconceptions through brainstorming
Analyzing naturalistic programming conversations qualitatively
Designing tools to communicate capabilities clearly