🤖 AI Summary
This study addresses the limited understanding of how AI-powered programming assistants reshape coding practices in real-world, long-term software development. Using a mixed-methods approach, it systematically investigates the longitudinal impact of AI tools on developer productivity, code quality, editing behavior, code reuse, and context switching, integrating fine-grained IDE telemetry from 800 developers over two years with survey responses from 62 practitioners. The work presents the first large-scale, longitudinal, telemetry-based evidence of the subtle yet profound behavioral shifts induced by AI assistants, overcoming the constraints of prior studies that relied on short-term experiments or subjective self-reports. Findings reveal that while AI users produce significantly more code, they also delete substantially more; and although developers perceive enhanced productivity, they consistently underestimate the extent of behavioral change along the other dimensions.
📝 Abstract
AI-powered coding assistants are rapidly becoming fixtures in professional IDEs, yet their sustained influence on everyday development remains poorly understood. Prior research has focused on short-term use or self-reported perceptions, leaving open questions about how sustained AI use reshapes actual daily coding practices in the long term. We address this gap with a mixed-methods study of AI adoption in IDEs, combining two years of fine-grained longitudinal telemetry from 800 developers with a survey of 62 professionals. We analyze five dimensions of workflow change: productivity, code quality, code editing, code reuse, and context switching. Telemetry reveals that AI users produce substantially more code but also delete significantly more. Meanwhile, survey respondents report productivity gains yet perceive minimal change in the other dimensions. Our results offer empirical insights into the silent restructuring of software workflows and provide implications for designing future AI-augmented tooling.