Enhancing COBOL Code Explanations: A Multi-Agents Approach Using Large Language Models

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor comprehensibility of COBOL code, which stems from scarce documentation, idiosyncratic syntax, and sequence lengths that exceed LLM context windows, this paper proposes a dual-agent large language model framework. One agent performs fine-grained code parsing and generates context-aware prompts; the other synthesizes functional explanations across hierarchical levels (function, file, and project). This approach overcomes single-model token limitations by integrating contextual signals from across the codebase, enabling end-to-end semantic interpretation of legacy COBOL systems. Evaluated on 14 open-source, real-world COBOL projects, the method improves function-level explanations over the baseline by 12.67% (METEOR), 18.59% (chrF), and 0.62% (SentenceBERT); at the file level it surpasses the baseline by 4.21%, 10.72%, and 14.68% in the purpose, functionality, and clarity of the generated explanations; and at the project level it conveys the overarching functionality and business purpose of 82% of the selected systems. These results demonstrate substantial gains in the maintainability and semantic transparency of legacy COBOL applications.
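The dual-agent division of labor described above (one agent chunking code and building context-aware prompts, a second agent producing and rolling up explanations) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the agent class names, the character-based stand-in for the token window, and the naive chunking heuristic are all assumptions, and the explainer agent is a stub where a real system would call an LLM.

```python
from dataclasses import dataclass

MAX_CHARS = 2000  # stand-in for the LLM token window (assumed budget)


@dataclass
class ParsingAgent:
    """Illustrative first agent: splits COBOL source into window-sized
    chunks and wraps each chunk in a context-aware prompt."""

    def chunk(self, source: str) -> list[str]:
        # Naive line-preserving split so no chunk exceeds the budget.
        chunks, buf = [], ""
        for line in source.splitlines(keepends=True):
            if buf and len(buf) + len(line) > MAX_CHARS:
                chunks.append(buf)
                buf = ""
            buf += line
        if buf:
            chunks.append(buf)
        return chunks

    def prompt(self, chunk: str, context: str) -> str:
        return f"Project context:\n{context}\n\nExplain this COBOL code:\n{chunk}"


@dataclass
class ExplainerAgent:
    """Illustrative second agent: turns prompts into explanations and
    rolls them up hierarchically. Stubbed; a real system calls an LLM."""

    def explain(self, prompt: str) -> str:
        body = prompt.split("Explain this COBOL code:\n", 1)[-1].strip()
        first_line = body.splitlines()[0] if body else ""
        return f"Explanation of chunk starting with: {first_line!r}"

    def summarize(self, parts: list[str], level: str) -> str:
        # File- and project-level summaries combine lower-level outputs,
        # so no single call has to fit the whole codebase in context.
        return f"[{level} summary] " + " | ".join(parts)


def explain_file(source: str, context: str) -> str:
    parser, explainer = ParsingAgent(), ExplainerAgent()
    chunk_explanations = [
        explainer.explain(parser.prompt(c, context)) for c in parser.chunk(source)
    ]
    return explainer.summarize(chunk_explanations, "file")
```

The key design point mirrored here is that each prompt stays within the window budget while the hierarchical roll-up preserves whole-codebase coverage; project-level summaries would be produced the same way from the file-level outputs.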

📝 Abstract
Common Business Oriented Language (COBOL) is a programming language used to develop business applications that are widely adopted by financial, business, and government agencies. Due to its age, complexity, and declining number of COBOL developers, maintaining COBOL codebases is becoming increasingly challenging. In particular, the lack of documentation makes it difficult for new developers to effectively understand and maintain COBOL systems. Existing research utilizes large language models (LLMs) to explain the functionality of code snippets. However, COBOL presents unique challenges due to its architectural and syntactical differences, which often cause its code to exceed the token window size of LLMs. In this work, we propose a multi-agent approach that leverages two LLM-based agents working collaboratively to generate explanations for functions, files, and the overall project. These agents collaborate by incorporating contextual information from the codebase into the code explanation prompts. We evaluate the effectiveness of our approach using 14 open-source, real-world COBOL projects. Our results indicate that our approach performs significantly better than the baseline in function code explanation, with improvements of 12.67%, 18.59%, and 0.62% in terms of METEOR, chrF, and SentenceBERT scores, respectively. At the file level, our approach effectively explains both short and long COBOL files, including those that exceed the token window size of LLMs, and surpasses the baseline by 4.21%, 10.72%, and 14.68% in explaining the purpose, functionality, and clarity of the generated explanation. At the project level, our approach generates explanations that convey the functionality and purpose of 82% of the selected projects.
Problem

Research questions and friction points this paper is trying to address.

Addressing COBOL code maintenance challenges due to outdated documentation
Overcoming LLM token limits for explaining large COBOL codebases
Improving automated COBOL code explanations using multi-agent LLM collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent approach pairing two collaborating LLM-based agents
Handles COBOL code that exceeds the LLM token window size
Significantly improves explanation quality over a single-LLM baseline