"My productivity is boosted, but ..." Demystifying Users' Perception on AI Coding Assistants

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite the rapid proliferation of AI-powered programming assistants (e.g., GitHub Copilot), empirical understanding of developer needs and pain points in real-world development remains limited. Method: We conducted the first large-scale, manually annotated, fine-grained theme–sentiment analysis of 12,000 user reviews from 32 mainstream AI extensions on the VS Code Marketplace—selected from 1,085 tools (90% released within the past two years). Integrating qualitative coding with marketplace statistics, we constructed a taxonomy capturing functional satisfaction, performance bottlenecks, and usage expectations. Contribution/Results: Our analysis identifies four critical dimensions prioritized by developers: intelligence, contextual awareness, customizability, and resource efficiency. We derive five evidence-based practical implications, revealing significant gaps in current tools’ ability to comprehend complex logic, adapt to personalized workflows, and operate with low computational overhead—thereby providing both empirical grounding and a theoretical framework for designing next-generation AI programming assistants.

📝 Abstract
This paper explores fundamental questions in the era when AI coding assistants like GitHub Copilot are widely adopted: what do developers truly value and criticize in AI coding assistants, and what does this reveal about their needs and expectations in real-world software development? Unlike previous studies that conduct observational research in controlled and simulated environments, we analyze extensive, first-hand user reviews of AI coding assistants, which capture developers' authentic perspectives and experiences drawn directly from their actual day-to-day work contexts. We identify 1,085 AI coding assistants in the Visual Studio Code Marketplace. Although they account for only 1.64% of all extensions, we observe a surge in these assistants: over 90% of them were released within the past two years. We then manually analyze user reviews sampled from 32 AI coding assistants that have sufficient installations and reviews, and construct a comprehensive taxonomy of user concerns and feedback about these assistants. We manually annotate each review's attitude when it mentions certain aspects of coding assistants, yielding nuanced insights into user satisfaction and dissatisfaction regarding specific features, concerns, and overall tool performance. Building on the findings, including how users demand not just intelligent suggestions but also context-aware, customizable, and resource-efficient interactions, we propose five practical implications and suggestions to guide the enhancement of AI coding assistants so that they satisfy user needs.
Problem

Research questions and friction points this paper is trying to address.

Explore developers' values and criticisms of AI coding assistants
Analyze real-world user reviews for authentic developer perspectives
Propose enhancements for context-aware and efficient AI coding tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing first-hand user reviews
Manual annotation of review attitudes
Proposing context-aware customizable solutions
Yunbo Lyu
PhD Candidate, Singapore Management University
Software Engineering
Zhou Yang
University of Alberta
Jieke Shi
PhD Candidate & Research Engineer, Singapore Management University
Software Engineering, AI Software Testing
Jianming Chang
Southeast University
Yue Liu
Singapore Management University
David Lo
Singapore Management University