🤖 AI Summary
Code summarization has long been confined to the function level, ignoring higher-level context such as classes and entire code repositories. Method: This work formally introduces two new tasks, class-level and repository-level code summarization, and constructs the first structured, multi-granularity benchmark incorporating real-world class- and repository-level context. It proposes a structured context enhancement mechanism that combines retrieval-augmented generation (RAG) and few-shot learning to improve the summarization capability of large language models, and evaluates with multidimensional metrics including SIDE, BLEURT, METEOR, and BLEU-4. Contribution/Results: A fine-tuned CodeT5+ base model achieves state-of-the-art performance; DeepSeek-Coder-1.3B and StarCoder2-15B improve significantly on BLEURT, METEOR, and BLEU-4; repository-level summarization shows initial promise but demands strong structural modeling and substantial computational resources. All data, code, and experimental results are publicly released.
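The n-gram metrics mentioned above can be made concrete with a simplified sentence-level BLEU-4: the geometric mean of clipped 1- to 4-gram precisions, scaled by a brevity penalty. This is an illustrative sketch only (single reference, no smoothing); real evaluations typically use a smoothed corpus-level implementation such as sacreBLEU or NLTK's.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(candidate: str, reference: str) -> float:
    """Simplified sentence-level BLEU-4: geometric mean of clipped
    1..4-gram precisions times a brevity penalty. Single reference,
    no smoothing -- a pedagogical sketch, not a drop-in evaluator."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped overlap: each candidate n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum((cand_counts & ref_counts).values())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    log_mean = sum(math.log(p) for p in precisions) / 4
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_mean)
```

BLEURT and SIDE, by contrast, are learned metrics and cannot be reduced to a short formula like this.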
📝 Abstract
Code summarization is a critical task in natural language processing and software engineering that aims to generate concise natural-language descriptions of source code. Recent advances have improved the quality of these summaries, enhancing code readability and maintainability. However, function-level summarization has typically ignored the content of the surrounding class and repository. This study investigates the effectiveness of code summarization models beyond the function level, examining how class and repository context affect summary quality. We revised benchmarks for evaluating models at the class and repository levels, assessed baseline models, and evaluated LLMs with in-context learning to determine how much additional context improves summary quality. The findings reveal that the fine-tuned state-of-the-art CodeT5+ base model excels at code summarization, while incorporating few-shot examples and code chunks retrieved via RAG significantly enhances LLM performance on this task. Notably, the DeepSeek-Coder-1.3B and StarCoder2-15B models show substantial improvements in metrics such as BLEURT, METEOR, and BLEU-4 at both the class and repository levels. Repository-level summarization shows promise and benefits from the inclusion of structured context, but requires significant computational resources. We also employ the recent SIDE metric for code summarization in our evaluation. This study contributes refined strategies for prompt engineering, few-shot learning, and RAG, and addresses gaps in code summarization benchmarks across granularity levels. Finally, we publish all study details, code, datasets, and evaluation results in the GitHub repository at https://github.com/kilimanj4r0/code-summarization-beyond-function-level.
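The context-augmented prompting described in the abstract, retrieving relevant repository chunks and combining them with few-shot examples, can be sketched roughly as follows. This is a minimal illustration under assumed design choices: the function names, the Jaccard token-overlap retriever, and the prompt template are stand-ins, not the paper's actual retriever or template.

```python
import re

def tokenize(code: str) -> set:
    """Crude identifier-level tokenization used only for similarity scoring."""
    return set(re.findall(r"[A-Za-z_]\w*", code.lower()))

def retrieve_chunks(query_code: str, repo_chunks: list, k: int = 2) -> list:
    """Rank repository code chunks by Jaccard token overlap with the
    target function -- a toy stand-in for a real RAG retriever."""
    q = tokenize(query_code)
    def score(chunk):
        c = tokenize(chunk)
        return len(q & c) / max(len(q | c), 1)
    return sorted(repo_chunks, key=score, reverse=True)[:k]

def build_prompt(target_fn: str, repo_chunks: list, few_shot: list) -> str:
    """Assemble a few-shot, context-augmented summarization prompt.

    few_shot is a list of (code, summary) pairs shown as worked examples
    before the retrieved repository context and the target function."""
    parts = ["Summarize the following code in one sentence.\n"]
    for code, summary in few_shot:
        parts.append(f"Code:\n{code}\nSummary: {summary}\n")
    parts.append("Relevant repository context:")
    parts.extend(retrieve_chunks(target_fn, repo_chunks))
    parts.append(f"\nCode:\n{target_fn}\nSummary:")
    return "\n".join(parts)
```

In a real pipeline the retriever would embed chunks with a code encoder and the resulting prompt would be sent to a model such as DeepSeek-Coder-1.3B or StarCoder2-15B; here only the assembly logic is shown.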