Autonomous Legacy Web Application Upgrades Using a Multi-Agent System

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address security and stability risks in legacy web applications stemming from high upgrade costs and outdated technology stacks, this paper proposes a collaborative multi-agent system for legacy code modernization. Built upon large language models (LLMs), the system employs a cross-file context preservation mechanism and a phased task orchestration architecture to enable full-stack, automated migration of the view layer alongside security hardening. Its key innovations include the integration of zero-shot and one-shot prompting, modular task distribution, and rigorous output validation, which together improve the reliability and consistency of LLM-generated artifacts in long-horizon software maintenance tasks. Experimental evaluation demonstrates a 37% reduction in error rate compared to single-LLM baselines, with 92% accuracy on small-file upgrades. The end-to-end pipeline is reproducible and publicly available as open source.
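The summary above describes a phased pipeline in which specialized agents hand off work through a shared cross-file context. A minimal sketch of that orchestration pattern is given below; all names, the phase breakdown, and the deprecated-API example (`render_to_response`, removed in modern Django) are illustrative assumptions, not taken from the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """Cross-file context carried between phases (hypothetical structure)."""
    files: dict = field(default_factory=dict)   # path -> current source text
    notes: list = field(default_factory=list)   # observations passed between agents

def analysis_agent(ctx: SharedContext) -> None:
    # Phase 1: record which files use a deprecated API.
    for path, src in ctx.files.items():
        if "render_to_response" in src:
            ctx.notes.append(f"{path}: uses render_to_response")

def upgrade_agent(ctx: SharedContext) -> None:
    # Phase 2: apply the migration flagged in phase 1.
    for note in ctx.notes:
        path = note.split(":")[0]
        ctx.files[path] = ctx.files[path].replace("render_to_response", "render")

def validation_agent(ctx: SharedContext) -> list:
    # Phase 3: verify no deprecated calls remain; return offending paths.
    return [p for p, s in ctx.files.items() if "render_to_response" in s]

def run_pipeline(files: dict) -> SharedContext:
    # Each phase reads and writes the same shared context,
    # which is how context is preserved across agents.
    ctx = SharedContext(files=dict(files))
    for phase in (analysis_agent, upgrade_agent, validation_agent):
        phase(ctx)
    return ctx
```

The key design point the paper emphasizes, preserving context across tasks and agents, corresponds here to every phase operating on one `SharedContext` rather than receiving only its own slice of input.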

📝 Abstract
The use of Large Language Models (LLMs) for autonomous code generation is gaining attention in emerging technologies. As LLM capabilities expand, they offer new possibilities such as code refactoring, security enhancements, and legacy application upgrades. Many outdated web applications pose security and reliability challenges, yet companies continue using them due to the complexity and cost of upgrades. To address this, we propose an LLM-based multi-agent system that autonomously upgrades legacy web applications to the latest versions. The system distributes tasks across multiple phases, updating all relevant files. To evaluate its effectiveness, we employed Zero-Shot Learning (ZSL) and One-Shot Learning (OSL) prompts, applying identical instructions in both cases. The evaluation involved updating view files and measuring the number and types of errors in the output. For complex tasks, we counted the successfully met requirements. The experiments compared the proposed system with standalone LLM execution, repeated multiple times to account for stochastic behavior. Results indicate that our system maintains context across tasks and agents, improving solution quality over the base model in some cases. This study provides a foundation for future model implementations in legacy code updates. Additionally, findings highlight LLMs' ability to update small outdated files with high precision, even with basic prompts. The source code is publicly available on GitHub: https://github.com/alasalm1/Multi-agent-pipeline.
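The abstract's evaluation compares Zero-Shot Learning (ZSL) and One-Shot Learning (OSL) prompts built from identical instructions, where the one-shot variant additionally includes a single worked example. A minimal sketch of that prompt construction (the format and section labels are hypothetical, not the paper's actual templates) might be:

```python
def build_prompt(instruction, legacy_code, example=None):
    """Build a zero-shot prompt, or a one-shot prompt when an
    (old_code, new_code) example pair is supplied."""
    parts = [instruction]
    if example is not None:
        # One-shot: prepend one worked before/after migration.
        old, new = example
        parts += ["Example before:", old, "Example after:", new]
    parts += ["Now upgrade this file:", legacy_code]
    return "\n\n".join(parts)

zsl = build_prompt("Upgrade this view file to the latest framework version.",
                   "<legacy view source>")
osl = build_prompt("Upgrade this view file to the latest framework version.",
                   "<legacy view source>",
                   example=("<old sample view>", "<upgraded sample view>"))
```

Keeping the instruction text identical between the two variants, as the abstract describes, isolates the effect of the single demonstration on output quality.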
Problem

Research questions and friction points this paper is trying to address.

Web Application
Automatic Update
Security and Stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Code Updating
Memory-enhanced Processing