🤖 AI Summary
This study addresses the emergent "vibe coding" phenomenon—intuitive, trial-and-error programming enabled by AI—and systematically investigates its root causes, practitioner experiences, and critical gaps in quality assurance. Using a systematic grey literature review, we analyze 518 authentic behavioral accounts drawn from 101 practitioner sources, identifying for the first time a distinct cohort of "fragile developers": practitioners highly reliant on AI-generated code who are proficient in rapid prototyping yet deficient in debugging, validation, and code verification. Our findings expose a fundamental speed–quality trade-off paradox: users consistently underestimate defects in AI-generated code, neglect testing, and sacrifice long-term maintainability. This work provides empirical grounding for the growing quality crisis in AI-augmented software development and proposes a novel quality assurance paradigm centered on verifiability and debuggability, informing next-generation tool design.
📝 Abstract
AI code generation tools are transforming software development, especially for novice and non-software developers, by enabling them to write code and build applications faster and with little to no human intervention. Vibe coding is the practice in which users rely on AI code generation tools through intuition and trial-and-error, without necessarily understanding the underlying code. Despite widespread adoption, no research has systematically investigated why users engage in vibe coding, what they experience while doing so, and how they approach quality assurance (QA) and perceive the quality of the AI-generated code. To this end, we conduct a systematic grey literature review of 101 practitioner sources, extracting 518 firsthand behavioral accounts of vibe coding practices, challenges, and limitations. Our analysis reveals a speed–quality trade-off paradox: vibe coders are motivated by speed and accessibility, often experiencing rapid "instant success and flow", yet most perceive the resulting code as fast but flawed. QA practices are frequently overlooked, with many users skipping testing, accepting the models' or tools' outputs without modification, or delegating checks back to the AI code generation tools themselves. This creates a new class of vulnerable software developers, particularly those who can build a product but are unable to debug it when issues arise. We argue that vibe coding lowers barriers and accelerates prototyping, but at the cost of reliability and maintainability. These insights carry implications for tool designers and software development teams. Understanding how vibe coding is practiced today is crucial for guiding its responsible use and preventing a broader QA crisis in AI-assisted development.