🤖 AI Summary
This study investigates how AI-powered programming assistants fare in real-world enterprise software projects and how they affect software engineering workflows and developer experience. Drawing on a survey of 57 developers with diverse backgrounds and a review of 35 existing user studies, the work presents the first mixed-methods empirical investigation focused specifically on enterprise deployment contexts. From these findings, the authors formulate a requirements framework for AI programming assistants grounded in practical engineering needs, uncover critical challenges in current deployments, and articulate key user expectations. The results offer empirically grounded insights and concrete guidance for designing developer tools and optimizing the underlying AI models.
📝 Abstract
The rise of large language models (LLMs) has accelerated the development of automated techniques and tools that support various software engineering tasks, e.g., program understanding, code generation, software testing, and program repair. As LLMs for code (CodeLLMs) are employed to automate these tasks, a question arises, especially in enterprise settings: are these coding assistants and the CodeLLMs that power them ready for real-world projects and enterprise use cases, and how do they impact the existing software engineering process and user experience? In this paper, we survey 57 developers from different domains with varying software engineering skill levels about their experience with AI coding assistants and CodeLLMs. We also review 35 user surveys on the usage, experience, and expectations of professionals and students using AI coding assistants and CodeLLMs. Based on our study findings and our analysis of existing surveys, we discuss requirements for AI-powered coding assistants.