🤖 AI Summary
Programmers often triage large volumes of static analysis alerts manually, struggling to identify underlying correlations and common patterns, which makes root-cause localization inefficient. This paper proposes an active-feedback-driven rule inference method that integrates active learning, structural similarity analysis, and interactive visualization to automatically infer interpretable summary rules from code features such as call chains and field accesses. The approach supports user-defined clustering and dynamic pattern refinement. In evaluations on two mature Java projects, a user study showed faster warning comprehension and greater diagnostic confidence, and a simulation showed that rule-level feedback aligns all inferred rules with a simulated user's labels in only 11.8 interactions on average. Its core contribution is the first incorporation of active feedback into alert summarization, enabling personalized, evolvable, and interactive alert understanding.
📝 Abstract
Programmers using bug-finding tools often review their reported warnings one by one. Based on the insight that identifying recurring themes and relationships can enhance the cognitive process of sensemaking, we propose CLARITY, which supports interpreting tool-generated warnings through interactive inquiry. CLARITY derives summary rules for custom grouping of related warnings with active feedback. As users mark warnings as interesting or uninteresting, CLARITY's rule inference algorithm surfaces common symptoms, highlighting structural similarities in containment, subtyping, invoked methods, accessed fields, and expressions.
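To make the idea concrete, here is a minimal illustrative sketch of feedback-driven rule inference. This is an assumption-laden toy, not CLARITY's actual algorithm: it treats each warning as a bag of structural features (e.g., invoked methods, accessed fields) and infers a candidate rule as the feature intersection of the warnings a user has labeled, then groups unlabeled warnings that match the rule.

```python
# Hypothetical sketch, NOT CLARITY's real inference: a rule is modeled as
# the set of structural features shared by all user-labeled warnings.

def infer_rule(labeled_warnings):
    """Candidate rule = features common to every labeled warning."""
    feature_sets = [set(w["features"]) for w in labeled_warnings]
    return set.intersection(*feature_sets) if feature_sets else set()

def matching(rule, warnings):
    """Warnings whose feature set contains every feature of the rule."""
    return [w for w in warnings if rule and rule <= set(w["features"])]

# Toy warnings with made-up structural features (identifiers are illustrative).
warnings = [
    {"id": 1, "features": ["calls:close()", "field:buffer", "type:Stream"]},
    {"id": 2, "features": ["calls:close()", "field:buffer", "type:Reader"]},
    {"id": 3, "features": ["calls:parse()", "field:cursor", "type:Parser"]},
]

# Suppose the user marks warnings 1 and 2 as uninteresting;
# the inferred rule then sweeps in every warning with the same symptoms.
rule = infer_rule(warnings[:2])
print(sorted(rule))                                  # ['calls:close()', 'field:buffer']
print([w["id"] for w in matching(rule, warnings)])   # [1, 2]
```

In the real system, the user can accept or reject such a rule, and that rule-level feedback is what lets inference converge in few interactions.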
We demonstrate CLARITY on Infer and SpotBugs warnings across two mature Java projects. In a within-subject user study with 14 participants, users articulated root causes for similar uninteresting warnings faster and with more confidence using CLARITY. We observed significant individual variation in desired grouping, reinforcing the need for customizable sensemaking. Simulation shows that with rule-level feedback, only 11.8 interactions are needed on average to align all inferred rules with a simulated user's labels (vs. 17.8 without). Our evaluation suggests that CLARITY's active learning-based summarization enhances interactive warning sensemaking.