🤖 AI Summary
This paper addresses the low efficiency in specification authoring, code generation, and iterative debugging during API-first development of RESTful microservices. We propose a multi-agent collaborative framework powered by large language models (LLMs), which decouples OpenAPI specification generation, server-side code synthesis, and feedback-driven refinement—guided by execution logs and error messages—into specialized, interoperable agents. This architecture establishes a closed-loop iterative optimization process, substantially enhancing LLMs’ capability to detect and rectify semantic inconsistencies and runtime defects. Evaluated on the PRAB benchmark, our approach generates functionally complete, business-logic-consistent service code in a single pass for small-to-medium-complexity APIs, reducing average iteration count by 62%. Results demonstrate significant improvements in development efficiency, functional correctness, and system robustness.
📝 Abstract
This paper presents a system that uses agents based on Large Language Models (LLMs) to automate API-first development of RESTful microservices. The system creates an OpenAPI specification, generates server code from it, and refines the code through a feedback loop that analyzes execution logs and error messages. Integrating log analysis enables the LLM to detect and address issues efficiently, reducing the number of iterations required to produce functional and robust services. The main goals of this study are to advance the automation of API-first development for RESTful web services and to test the capability of LLM-based multi-agent systems to support the API-first approach. To evaluate the proposed system's potential, we used the PRAB benchmark. The results indicate that when the OpenAPI specification is kept small and focused, LLMs can generate complete, functional code whose business logic aligns with the specification. The code for the system is publicly available at https://github.com/sirbh/code-gen
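The closed-loop cycle the abstract describes (specification → code synthesis → execution → log-driven refinement) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the agent functions, names (`spec_agent`, `codegen_agent`, `refine_agent`), and the simulated runtime are all hypothetical stubs standing in for LLM calls and a real service deployment.

```python
# Hypothetical sketch of the closed-loop, multi-agent refinement cycle:
# spec generation -> code synthesis -> execution -> log analysis -> refinement.
# Each agent would call an LLM in the real system; here they are stubs.

from dataclasses import dataclass, field

@dataclass
class Attempt:
    code: str
    errors: list = field(default_factory=list)

def spec_agent(requirements: str) -> str:
    # Stub: would prompt an LLM to author an OpenAPI document.
    return f"openapi-spec({requirements})"

def codegen_agent(spec: str) -> str:
    # Stub: would prompt an LLM to synthesize server code from the spec.
    return f"server-code-v0({spec})"

def execute(code: str) -> list:
    # Stub runtime: pretend the first attempt fails and refined code passes.
    return [] if "refined" in code else ["HTTP 500 on POST /items"]

def refine_agent(code: str, errors: list) -> str:
    # Stub: would feed execution logs and error messages back to the LLM.
    return code + " refined(" + "; ".join(errors) + ")"

def develop(requirements: str, max_iters: int = 5) -> Attempt:
    """Run the full pipeline, iterating until the service executes cleanly."""
    spec = spec_agent(requirements)
    code = codegen_agent(spec)
    for _ in range(max_iters):
        errors = execute(code)
        if not errors:
            return Attempt(code, [])
        code = refine_agent(code, errors)
    return Attempt(code, execute(code))

result = develop("simple inventory API")
print(result.errors)  # → []
```

In this sketch the iteration budget (`max_iters`) caps the feedback loop; the reported 62% reduction in average iterations corresponds to the loop terminating early once execution produces no errors.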