🤖 AI Summary
This work presents the first systematic investigation of using large language models (LLMs) to automatically generate file-level logging statements in machine learning (ML) applications. Addressing prior limitations in logging-statement localization and in alignment with project-specific logging conventions, the authors employ GPT-4o mini with structured prompt engineering to insert logging statements into de-logged Python code. Evaluation combines automated metrics with human validation, assessing insertion location, log level, variable referencing, and natural-language quality. Results show 63.91% accuracy in log placement, but also severe over-logging (82.66%), redundant logs at function entry and exit points, omissions within large code blocks, and violations of project logging norms. The study identifies critical bottlenecks in LLM-based log generation, providing empirical grounding and concrete directions for advancing trustworthy, automated logging in ML systems.
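The pipeline hinges on first removing existing logs from each file before asking the model to re-insert them. The paper's de-logging procedure is not reproduced here, but the core step can be sketched with Python's `ast` module; the logger names and method set below are illustrative assumptions, not the authors' actual detection rules.

```python
import ast

# Illustrative assumptions: common logger names and log-call methods.
LOGGER_NAMES = {"logging", "logger", "log"}
LOG_METHODS = {"debug", "info", "warning", "error", "critical", "exception", "log"}

class LogStripper(ast.NodeTransformer):
    """Drop expression statements that call a logger, e.g. `logger.info(...)`."""

    def visit_Expr(self, node):
        call = node.value
        if (
            isinstance(call, ast.Call)
            and isinstance(call.func, ast.Attribute)
            and call.func.attr in LOG_METHODS
            and isinstance(call.func.value, ast.Name)
            and call.func.value.id in LOGGER_NAMES
        ):
            return None  # remove the log statement from the tree
        return node

def strip_logs(source: str) -> str:
    """Return `source` with recognized log statements removed (Python 3.9+)."""
    tree = LogStripper().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    # Caveat: a block whose only statement was a log call would end up empty;
    # a production tool would insert `pass` there.
    return ast.unparse(tree)

if __name__ == "__main__":
    code = (
        "import logging\n"
        "logger = logging.getLogger(__name__)\n"
        "def train(epochs):\n"
        "    logger.info('training for %d epochs', epochs)\n"
        "    for epoch in range(epochs):\n"
        "        logger.debug('epoch %d', epoch)\n"
        "        step(epoch)\n"
    )
    print(strip_logs(code))
```

A real tool would also need to handle loggers bound under other names and log calls embedded in larger expressions, which this sketch ignores.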
📝 Abstract
Logging is essential in software development, helping developers monitor system behavior and debug applications. Given the ability of large language models (LLMs) to generate natural language and code, researchers are exploring their potential to generate log statements. However, prior work focuses on evaluating logs introduced into individual functions, leaving file-level log generation underexplored, especially in machine learning (ML) applications, where comprehensive logging can enhance reliability. In this study, we evaluate the capacity of GPT-4o mini, as a case study, to generate log statements for ML projects at the file level. We gathered 171 ML repositories containing 4,073 Python files with at least one log statement. We identified and removed the original logs from the files, prompted the LLM to generate logs for them, and evaluated the position, log level, variables, and text quality of the generated logs against the human-written originals. In addition, we manually analyzed a representative sample of generated logs to identify common patterns and challenges. We find that the LLM introduces logs in the same place as humans in 63.91% of cases, but at the cost of a high over-logging rate of 82.66%. Furthermore, our manual analysis reveals recurring challenges in file-level logging: over-logging at the beginning and end of functions, difficulty logging within large code blocks, and misalignment with project-specific logging conventions. While the LLM shows promise for generating logs for complete files, these limitations must be addressed before practical adoption.
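For readers who want to reproduce the prompting step, a minimal sketch using the OpenAI Python SDK is shown below. The system prompt and the temperature setting are illustrative assumptions; the paper's structured prompt is not reproduced here.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt; the study's structured prompt is not shown here.
SYSTEM_PROMPT = (
    "You are an experienced Python developer. Insert appropriate logging "
    "statements into the file below, choosing sensible log levels and "
    "messages. Return only the complete modified file."
)

def generate_logs(delogged_source: str) -> str:
    """Ask GPT-4o mini to re-insert log statements into a de-logged file."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # assumption: deterministic output eases comparison
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": delogged_source},
        ],
    )
    return response.choices[0].message.content
```

The returned file can then be aligned with the original, human-written version to score log placement, level, referenced variables, and message quality.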