Logging Like Humans for LLMs: Rethinking Logging via Execution and Runtime Feedback

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing log generation approaches rely on static analysis and evaluate output quality primarily through textual similarity, which fails to meet the practical demands of downstream debugging tasks in which large language models (LLMs) consume the logs. This work proposes ReLog, a framework that reframes log generation as a runtime-guided, task-oriented iterative process. By integrating LLMs with program execution feedback and compilation-based repair, ReLog generates actionable logs in both source-available and source-unavailable scenarios. Departing from conventional textual-similarity metrics, ReLog evaluates logs by their actual effectiveness in fault localization and repair. On the Defects4J benchmark, ReLog achieves a direct-debugging F1 score of 0.520 (repairing 97 defects) and an indirect-setting F1 of 0.408, substantially outperforming existing methods.
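
The loop described above (generate → compile/repair → execute → evaluate → refine) can be pictured with a short sketch. Everything below is an assumption made for illustration: the helper names, the `Verdict` structure, and the fixed round budget are not ReLog's actual interfaces, and the LLM, compiler, and test runner are stubbed so the control flow runs as-is.

```python
"""Minimal sketch of a runtime-guided logging loop in the spirit of ReLog.

All names and the stopping criterion are illustrative assumptions; the
paper's actual interfaces may differ.
"""

from dataclasses import dataclass


@dataclass
class Verdict:
    sufficient: bool  # do the runtime logs already support debugging?
    feedback: str     # critique used to refine the logging statements


# --- stubs standing in for the real components (assumptions) ---------

def llm_generate_logs(source: str) -> str:
    return source + '\nlog.info("entering method");'   # placeholder

def llm_repair(source: str, errors: str) -> str:
    return source                                       # placeholder

def llm_refine(source: str, logs: str, feedback: str) -> str:
    return source                                       # placeholder

def try_compile(source: str) -> tuple[bool, str]:
    return True, ""                                     # placeholder

def run_failing_tests(source: str) -> str:
    return "runtime log output"                         # placeholder

def judge_utility(logs: str) -> Verdict:
    return Verdict(sufficient=True, feedback="")        # placeholder


# --- the iterative generate / compile-repair / execute / refine loop -

def relog_loop(source: str, max_rounds: int = 5) -> tuple[str, str]:
    instrumented = llm_generate_logs(source)    # 1. propose logging statements
    runtime_logs = ""
    for _ in range(max_rounds):
        ok, errors = try_compile(instrumented)  # 2. compilation check
        if not ok:
            # compilation-based repair: feed compiler errors back to the LLM
            instrumented = llm_repair(instrumented, errors)
            continue
        runtime_logs = run_failing_tests(instrumented)  # 3. execute, collect logs
        verdict = judge_utility(runtime_logs)           # 4. assess downstream utility
        if verdict.sufficient:
            break
        # 5. refine the logging statements using runtime feedback
        instrumented = llm_refine(instrumented, runtime_logs, verdict.feedback)
    return instrumented, runtime_logs
```

The design point the paper argues for is step 4: the loop terminates on downstream utility (do the logs actually support debugging?) rather than on resemblance to developer-written logs.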
📝 Abstract
Logging statements are essential for software debugging and maintenance. However, existing approaches to automatic log generation rely on static analysis and produce statements in a single pass without considering runtime behavior. They are also typically evaluated by similarity to developer-written logs, assuming these logs form an adequate gold standard. This assumption is increasingly limiting in the LLM era, where logs are consumed not only by developers but also by LLMs for downstream tasks. As a result, optimizing logs for human similarity does not necessarily reflect their practical utility. To address these limitations, we introduce ReLog, an iterative log generation framework guided by runtime feedback. ReLog leverages LLMs to generate, execute, evaluate, and refine logging statements so that runtime logs better support downstream tasks. Instead of comparing against developer-written logs, we evaluate ReLog through downstream debugging tasks, including defect localization and repair. We construct a benchmark based on Defects4J under both direct and indirect debugging settings. Results show that ReLog consistently outperforms all baselines, achieving an F1 score of 0.520 and repairing 97 defects in the direct setting, and achieving the best F1 score of 0.408 in the indirect setting where source code is unavailable. Additional experiments across multiple LLMs demonstrate the generality of the framework, while ablations confirm the importance of iterative refinement and compilation repair. Overall, our work reframes logging as a runtime-guided, task-oriented process and advocates evaluating logs by their downstream utility rather than textual similarity.
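
Since both debugging settings are scored with F1, the utility metric itself reduces to precision and recall over the code locations flagged as faulty after reading the logs. A minimal sketch, assuming set-based scoring at some fixed granularity (line, method, or file; the benchmark's exact granularity is not stated here):

```python
def localization_f1(predicted: set[str], ground_truth: set[str]) -> float:
    """F1 of predicted faulty locations vs. the known defect locations.

    The granularity of a "location" is an assumption for illustration;
    the paper's benchmark may score localization differently.
    """
    if not predicted or not ground_truth:
        return 0.0
    tp = len(predicted & ground_truth)          # correctly flagged locations
    precision = tp / len(predicted)
    recall = tp / len(ground_truth)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# e.g. two of three flagged methods are truly defective, one defect is missed
print(localization_f1({"A.foo", "A.bar", "B.baz"},
                      {"A.foo", "A.bar", "C.qux"}))   # -> 0.666...
```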
Problem

Research questions and friction points this paper is trying to address.

logging generation
runtime feedback
LLM consumption
downstream utility
software debugging
Innovation

Methods, ideas, or system contributions that make the work stand out.

runtime feedback
iterative logging generation
LLM-guided debugging
task-oriented logging
downstream utility evaluation