🤖 AI Summary
This paper addresses the paradigm shift in software reuse driven by generative AI, identifying a growing trend toward "cargo-cult" development (overreliance on AI-generated code) that undermines code trustworthiness and engineering controllability. Through conceptual analysis, comparative paradigm examination, and interdisciplinary critical review, we propose an "AI-native" software engineering framework that systematically identifies the core challenges in AI-driven reuse, including traceability, verifiability, and accountability. We introduce, for the first time, a research agenda for trustworthy AI-human collaboration in software engineering, articulating concrete risk-mitigation strategies and a multi-stage research roadmap. Furthermore, we advocate responsible AI co-development norms grounded in transparency, auditability, and evolutionary governance. Our work establishes a theoretical foundation and practical guidance for building next-generation software reuse systems that are human-AI collaborative, auditable, and evolutionarily sustainable.
📝 Abstract
Software development is currently undergoing a paradigm shift in which artificial intelligence and generative software reuse are taking center stage in software creation. Consequently, earlier software reuse practices and methods are rapidly being replaced by AI-assisted approaches in which developers place their trust in code that has been generated by artificial intelligence. This is leading to a new form of software reuse that is conceptually not all that different from cargo-cult development. In this paper, we discuss the implications of AI-assisted generative software reuse in the context of emerging "AI-native" software engineering, raise relevant questions, and define a tentative research agenda and call to action for tackling some of the central issues associated with this approach.