🤖 AI Summary
Existing automated program repair (APR) evaluation overly emphasizes warning elimination rates while neglecting critical side effects—including newly introduced defects, functional regressions, and structural degradation. Method: This paper proposes the first security-oriented, multi-dimensional APR evaluation framework that systematically quantifies repair impact across three dimensions: functional correctness (unit test pass rate), code structural quality (SonarQube metrics), and side effects (number of newly introduced violations). Contribution/Results: An empirical study using Sorald on 3,529 SonarQube violations in 2,393 Java code snippets from Stack Overflow reveals that, despite high warning elimination rates, repairs introduced 2,120 new violations, caused 24% of unit tests to fail, and significantly degraded multiple code quality metrics. These findings expose the severe limitations of relying solely on warning elimination as an evaluation criterion and provide both a methodological foundation and empirical evidence for developing more trustworthy and robust APR tools.
📝 Abstract
In supporting the development of high-quality software, especially necessary in the era of LLMs, automated program repair (APR) tools aim to improve code quality by automatically addressing violations detected by static analysis tools. Previous research tends to evaluate APR tools only for their ability to clear violations, neglecting their potential introduction of new (sometimes severe) violations, changes to code functionality, and degradation of code structure. There is thus a need for research to develop and assess comprehensive evaluation frameworks for APR tools. This study addresses this research gap and evaluates Sorald (a state-of-the-art APR tool) as a proof of concept. Sorald's effectiveness was evaluated in repairing 3,529 SonarQube violations across 30 rules within 2,393 Java code snippets extracted from Stack Overflow. Outcomes show that while Sorald fixed the targeted rule violations, it introduced 2,120 new faults (32 bugs, 2,088 code smells), reduced functional correctness (a 24% unit test failure rate), and degraded code structure, demonstrating the utility of our framework. Findings emphasize the need for evaluation methodologies that capture the full spectrum of APR tool effects, including side effects, to ensure their safe and effective adoption.
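The three evaluation dimensions described above can be sketched as a simple aggregation over per-snippet repair records. This is a minimal illustration, not the paper's actual implementation; the `RepairResult` record and all field names are hypothetical placeholders for data one would collect from SonarQube reports and unit test runs.

```python
from dataclasses import dataclass

@dataclass
class RepairResult:
    # Hypothetical per-snippet record; field names are illustrative.
    warnings_before: int   # targeted-rule violations flagged before repair
    warnings_after: int    # targeted-rule violations remaining after repair
    new_violations: int    # violations newly introduced by the repair
    tests_passed: int      # unit tests passing on the repaired snippet
    tests_total: int       # unit tests run on the repaired snippet

def evaluate(results: list[RepairResult]) -> dict:
    """Aggregate the three dimensions: warning elimination,
    side effects (new violations), and functional correctness."""
    eliminated = sum(r.warnings_before - r.warnings_after for r in results)
    total_before = sum(r.warnings_before for r in results)
    return {
        "elimination_rate": eliminated / total_before,
        "new_violations": sum(r.new_violations for r in results),
        "test_pass_rate": sum(r.tests_passed for r in results)
                          / sum(r.tests_total for r in results),
    }
```

Reporting all three numbers together, rather than the elimination rate alone, is what surfaces the side effects the study highlights: a tool can score highly on `elimination_rate` while `new_violations` grows and `test_pass_rate` falls.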