🤖 AI Summary
This work addresses the limitation of existing Automatic Code Summarization (ACS) approaches, which operate predominantly at the method level and lack support for higher-level code units such as files or modules. To bridge this gap, we propose an LLM-driven summarization framework tailored to high-level code units. Methodologically, we systematically compare three summarization paradigms—full-code input, code simplification, and hierarchical abstraction—and introduce a novel structure-aware hierarchical summarization strategy, complemented by code simplification techniques and hierarchical prompt engineering. We further demonstrate that large language models (LLMs) serve as highly reliable automatic evaluators (Spearman ρ > 0.85). Key contributions include: (1) the first systematic investigation of file- and module-level ACS; (2) significant improvement in module-level summary quality via the hierarchical strategy; and (3) substantial reduction in human evaluation effort through LLM-based assessment. Empirical results show that full-code input is optimal for file-level summarization, whereas the hierarchical strategy achieves superior performance at the module level.
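The hierarchical strategy described above (summarize each file, then fuse the file-level summaries into a module-level summary) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: `hierarchical_module_summary` and `toy_llm` are hypothetical names, and `toy_llm` is a trivial deterministic stand-in for a real LLM call, used only to make the control flow runnable.

```python
from typing import Callable, Dict

def hierarchical_module_summary(
    files: Dict[str, str],
    llm: Callable[[str], str],
) -> str:
    """Bottom-up summarization: summarize each file independently,
    then ask the model to fuse the file summaries into one
    module-level summary."""
    file_summaries = {
        path: llm(f"Summarize this file:\n{code}")
        for path, code in files.items()
    }
    fused = "\n".join(f"{path}: {summary}" for path, summary in file_summaries.items())
    return llm(f"Summarize this module from its file summaries:\n{fused}")

# Toy stand-in for an LLM: skips the instruction line and echoes
# the first non-empty content line of the prompt.
def toy_llm(prompt: str) -> str:
    lines = [l for l in prompt.splitlines()[1:] if l.strip()]
    return lines[0] if lines else ""
```

Because each file is summarized separately, the per-call input stays small, which is what makes this strategy viable for module-level units that exceed a single context window.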
📝 Abstract
Commenting code is a crucial activity in software development, as it facilitates future maintenance and updates. To enhance the efficiency of writing comments and reduce developers' workload, researchers have proposed various automated code summarization (ACS) techniques to automatically generate comments/summaries for given code units. However, these ACS techniques primarily focus on generating summaries for code units at the method level. There is a significant lack of research on summarizing higher-level code units, such as file-level and module-level code units, despite the fact that summaries of these higher-level code units are highly useful for quickly gaining a macro-level understanding of software components and architecture. To fill this gap, in this paper, we conduct a systematic study on how to use LLMs for commenting higher-level code units, including the file level and module level. These higher-level units are significantly larger than method-level ones, which poses challenges in handling long code inputs within LLM context constraints and maintaining efficiency. To address these issues, we explore various summarization strategies for ACS of higher-level code units, which can be divided into three types: full code summarization, reduced code summarization, and hierarchical code summarization. The experimental results suggest that for summarizing file-level code units, using the full code is the most effective approach, with reduced code serving as a cost-efficient alternative. However, for summarizing module-level code units, hierarchical code summarization becomes the most promising strategy. In addition, inspired by research on method-level ACS, we also investigate using an LLM as an evaluator to assess the quality of summaries of higher-level code units. The experimental results demonstrate that the LLM's evaluation results strongly correlate with human evaluations.