🤖 AI Summary
Large language models (LLMs) struggle to generate structurally sound, maintainable code for complex application-level software. To address this, the paper envisions KGACG, a knowledge-guided, multi-agent code generation framework. KGACG comprises three collaborative agents — a code organization planner, a coder, and a tester — and integrates domain knowledge injection with an execution-feedback loop to automate development end to end, from requirements and architectural design to executable, integrated code. Unlike conventional single-agent or knowledge-agnostic approaches, KGACG targets improved module cohesion, interface consistency, and long-term maintainability. A case study on a full-stack Java implementation of the “Tank Battle” game demonstrates the agents’ collaborative generation, integration, and validation process, along with the challenges that remain in complex application development.
📝 Abstract
Automated code generation driven by Large Language Models (LLMs) has enhanced development efficiency, yet generating complex application-level software code remains challenging. Multi-agent frameworks show potential, but existing methods perform inadequately in large-scale application-level code generation: they fail to ensure a reasonable organizational structure for project code and make the code generation process difficult to maintain. To address this, this paper envisions a Knowledge-Guided Application-Level Code Generation framework named KGACG, which aims to transform a software requirements specification and an architectural design document into executable code through a collaborative closed loop of the Code Organization & Planning Agent (COPA), the Coding Agent (CA), and the Testing Agent (TA), combined with a feedback mechanism. We demonstrate the agents’ collaborative process in a Java Tank Battle game case study and discuss the challenges that remain. KGACG is dedicated to advancing the automation of application-level software development.
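The closed loop described above — COPA plans the code organization, CA generates each module, TA validates it by execution, and failures feed back to CA — can be sketched as below. This is a minimal illustration in the case study's language (Java); all interface and method names are assumptions for exposition, not the authors' actual API.

```java
import java.util.List;

// Hypothetical sketch of KGACG's COPA -> CA -> TA feedback loop.
// Agent names follow the paper; everything else is illustrative.
public class KgacgLoop {
    interface PlannerAgent { List<String> plan(String requirements, String architecture); }
    interface CodingAgent  { String generate(String module, String feedback); }
    interface TestingAgent { String test(String code); }  // null = pass, else failure report

    static final int MAX_ROUNDS = 3;  // retry budget per module (assumed)

    static String generateProject(PlannerAgent copa, CodingAgent ca, TestingAgent ta,
                                  String requirements, String architecture) {
        StringBuilder project = new StringBuilder();
        // COPA turns requirements + architecture into an ordered module plan.
        for (String module : copa.plan(requirements, architecture)) {
            String feedback = null, code = null;
            for (int round = 0; round < MAX_ROUNDS; round++) {
                code = ca.generate(module, feedback);  // CA regenerates using TA feedback
                feedback = ta.test(code);              // TA runs execution-based checks
                if (feedback == null) break;           // tests passed: module accepted
            }
            project.append(code).append('\n');
        }
        return project.toString();
    }

    public static void main(String[] args) {
        // Toy stand-ins: the tester rejects the first draft once, then accepts.
        PlannerAgent copa = (req, arch) -> List.of("TankModel", "GameLoop");
        CodingAgent ca = (module, fb) ->
            "class " + module + " {}" + (fb == null ? " // draft" : " // fixed");
        TestingAgent ta = code -> code.contains("fixed") ? null : "check failed";

        System.out.println(generateProject(copa, ca, ta,
            "Tank Battle requirements", "layered design"));
    }
}
```

The point of the loop is that TA's failure report, not just a pass/fail bit, flows back into CA's next generation attempt, which is what the abstract's "feedback mechanism" refers to.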