Reducing False Positives in Static Bug Detection with LLMs: An Empirical Study in Industry

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high false positive rates of industrial-scale static analysis tools, which impose substantial manual review costs. Through an empirical evaluation within Tencent's advertising and marketing service system, we present the first systematic assessment of large language model (LLM)-based false positive filtering in a large enterprise setting. We propose a hybrid strategy that integrates LLMs with static analysis through context-aware prompting, deep code semantic understanding, and rule fusion to distinguish false alarms from true bugs. Experimental results show that the hybrid techniques eliminate 94%–98% of false positives while maintaining high recall, with per-alarm processing time as low as 2.1 seconds and cost as low as $0.0011, far below the cost of manual review. These findings validate both the practical potential and the inherent limitations of LLMs in real-world industrial applications.
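The hybrid strategy described above (context-aware prompting plus rule fusion) can be illustrated with a minimal Python sketch. This is not the paper's implementation, which is not public; every name here (`Alarm`, `build_alarm_prompt`, `fuse`) is hypothetical, and the fusion threshold is an arbitrary placeholder.

```python
# Illustrative sketch of LLM-assisted false-alarm triage (hypothetical names,
# not the paper's code): build a context-aware prompt for an LLM, then fuse
# the LLM's verdict with the static analyzer's own confidence score.

from dataclasses import dataclass


@dataclass
class Alarm:
    bug_type: str   # e.g. "null dereference"
    file: str
    line: int
    message: str


def build_alarm_prompt(alarm: Alarm, code_context: str) -> str:
    """Assemble a context-aware prompt: the alarm plus surrounding code."""
    return (
        f"A static analyzer reported a potential {alarm.bug_type} at "
        f"{alarm.file}:{alarm.line}: {alarm.message}\n"
        "Relevant code context:\n"
        f"{code_context}\n"
        "Answer TRUE_POSITIVE or FALSE_POSITIVE with a one-line reason."
    )


def fuse(llm_says_false_positive: bool, rule_confidence: float) -> str:
    """Rule fusion (placeholder policy): suppress an alarm only when the
    LLM flags it as a false positive AND the analyzer's own confidence
    in the rule is low; otherwise keep it for human review."""
    if llm_says_false_positive and rule_confidence < 0.8:
        return "suppress"
    return "report"
```

For example, an alarm the LLM judges spurious under a low-confidence rule would be suppressed (`fuse(True, 0.3)` → `"suppress"`), while a high-confidence rule keeps the alarm even against the LLM's verdict (`fuse(True, 0.9)` → `"report"`).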

📝 Abstract
Static analysis tools (SATs) are widely adopted in both academia and industry for improving software quality, yet their practical use is often hindered by high false positive rates, especially in large-scale enterprise systems. These false alarms demand substantial manual inspection, creating severe inefficiencies in industrial code review. While recent work has demonstrated the potential of large language models (LLMs) for false alarm reduction on open-source benchmarks, their effectiveness in real-world enterprise settings remains unclear. To bridge this gap, we conduct the first comprehensive empirical study of diverse LLM-based false alarm reduction techniques in an industrial context at Tencent, one of the largest IT companies in China. Using data from Tencent's enterprise-customized SAT on its large-scale Advertising and Marketing Services software, we construct a dataset of 433 alarms (328 false positives, 105 true positives) covering three common bug types. Through developer interviews and data analysis, we find that false positives are prevalent and waste substantial manual effort (e.g., 10-20 minutes of manual inspection per alarm). Meanwhile, our results show the huge potential of LLMs for reducing false alarms in industrial settings (e.g., hybrid techniques of LLM and static analysis eliminate 94-98% of false positives with high recall). Furthermore, LLM-based techniques are cost-effective, with per-alarm overheads of only 2.1-109.5 seconds and $0.0011-$0.12, representing orders-of-magnitude savings compared to manual review. Finally, our case analysis identifies key limitations of LLM-based false alarm reduction in industrial settings.
Problem

Research questions and friction points this paper is trying to address.

false positives
static bug detection
industrial software
code review
software quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based false alarm reduction
static analysis
empirical study
industrial software
false positive mitigation