🤖 AI Summary
Static analyzer rule implementations are often under-tested and prone to inconsistencies. To address this, we propose StaAgent, an automated testing framework that combines large language models (e.g., GPT-4o, Qwen) with a multi-agent collaboration paradigm for rule-oriented metamorphic testing. StaAgent generates bug-inducing seed programs together with semantically equivalent mutants to systematically validate rule behavior. It comprises four specialized agents for seed generation, code validation, mutation generation, and analyzer evaluation, which together enable end-to-end rule validation. Evaluated across five widely used static analyzers, StaAgent uncovered 64 problematic rules, 53 of which cannot be detected by the state-of-the-art baseline; two have already been fixed and three more confirmed by tool maintainers. This work demonstrates that LLM-powered multi-agent systems can validate static analyzer rules at scale, improving test coverage and defect detection capability.
📝 Abstract
Static analyzers play a critical role in identifying bugs early in the software development lifecycle, but their rule implementations are often under-tested and prone to inconsistencies. To address this, we propose StaAgent, an agentic framework that harnesses the generative capabilities of Large Language Models (LLMs) to systematically evaluate static analyzer rules. StaAgent comprises four specialized agents: a Seed Generation Agent that translates bug detection rules into concrete, bug-inducing seed programs; a Code Validation Agent that ensures the correctness of these seeds; a Mutation Generation Agent that produces semantically equivalent mutants; and an Analyzer Evaluation Agent that performs metamorphic testing by comparing the static analyzer's behavior on seeds and their corresponding mutants. By revealing inconsistent behaviors, StaAgent helps uncover flaws in rule implementations. This LLM-driven, multi-agent framework offers a scalable and adaptable solution for improving the reliability of static analyzers. We evaluated StaAgent with five state-of-the-art LLMs (CodeLlama, DeepSeek, Codestral, Qwen, and GPT-4o) across five widely used static analyzers (SpotBugs, SonarQube, ErrorProne, Infer, and PMD). The experimental results show that our approach can help reveal 64 problematic rules in the latest versions of these five static analyzers (i.e., 28 in SpotBugs, 18 in SonarQube, 6 in ErrorProne, 4 in Infer, and 8 in PMD). In addition, 53 of the 64 bugs cannot be detected by the state-of-the-art baseline. We have reported all the bugs to developers: two have already been fixed, three more have been confirmed, and the rest are awaiting a response. These results demonstrate the effectiveness of our approach and underscore the promise of agentic, LLM-driven data synthesis for advancing software engineering.
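The metamorphic check at the heart of the Analyzer Evaluation Agent can be sketched as follows: since a seed program and its mutant are semantically equivalent, the rule under test should fire on both or on neither, and any disagreement points to a potential rule defect. This is a minimal illustrative sketch, not the paper's implementation; the function name and the example rule ID are hypothetical, and findings are modeled simply as sets of rule IDs reported by the analyzer.

```python
# Hedged sketch of StaAgent's metamorphic oracle: a rule implementation is
# suspect when a seed program and a semantically equivalent mutant trigger
# different warnings for the rule under test. The function name and the
# example rule ID below are illustrative, not taken from the paper.

def metamorphic_inconsistency(rule_id, seed_findings, mutant_findings):
    """Describe any seed/mutant disagreement for rule_id, or return None.

    seed_findings and mutant_findings are sets of rule IDs reported by
    the analyzer on the seed and on one semantically equivalent mutant.
    """
    seed_hit = rule_id in seed_findings
    mutant_hit = rule_id in mutant_findings
    if seed_hit and not mutant_hit:
        return "potential false negative: mutant silently drops the warning"
    if mutant_hit and not seed_hit:
        return "potential false positive: mutant spuriously triggers the warning"
    return None  # consistent behavior, no evidence of a rule defect


# Example: the analyzer flags a (hypothetical) null-dereference rule on the
# seed but misses it on an equivalent mutant, exposing a candidate defect.
print(metamorphic_inconsistency(
    "NULL_DEREF",
    seed_findings={"NULL_DEREF", "DEAD_STORE"},
    mutant_findings={"DEAD_STORE"},
))
# → potential false negative: mutant silently drops the warning
```

In the full framework this comparison is repeated over many mutants per seed, so a single disagreement is enough to flag the rule for manual inspection.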