Just-in-Time Catching Test Generation at Meta

πŸ“… 2026-01-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the inefficiency of pre-merge defect detection and the high cost of false positives in large-scale backend systems by introducing Just-in-Time catching test generation. Unlike traditional hardening tests, which pass at generation time, catching tests are generated to fail, surfacing bugs in a code change before it lands. Code-change-aware generation produces candidate catches, and a rule-based assessor combined with a large language model (LLM) assessor prioritizes high-value candidates. Across 22,126 generated tests, the approach yields a 4× improvement in candidate catch generation over hardening tests and a 20× gain over coincidentally failing tests, while reducing human review load by 70%. Of 41 candidate catches reported to engineers, 8 were confirmed as true positives, 4 of which would have led to serious production failures had they remained uncaught.
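The two-stage assessment the summary describes can be sketched as follows. This is a minimal illustration, not Meta's implementation: the candidate fields, the rules, and the stubbed `llm_assessor` scoring are all invented for the example; in the paper the second stage is an actual LLM call.

```python
# Hypothetical sketch of rule-based + LLM-based assessment of candidate
# catches: a cheap rule filter discards obvious false positives, then a
# (stubbed, illustrative) LLM scorer ranks what remains for human review.
from dataclasses import dataclass

@dataclass
class CandidateCatch:
    test_name: str
    failure_message: str
    touches_changed_code: bool  # did the failing test exercise the diff?

def rule_based_assessor(c: CandidateCatch) -> bool:
    """Keep only candidates whose failure plausibly relates to the change."""
    if not c.touches_changed_code:
        return False  # failure unrelated to the diff: likely coincidental
    if "timeout" in c.failure_message.lower():
        return False  # infrastructure noise, not a behavioral bug
    return True

def llm_assessor(c: CandidateCatch) -> float:
    """Stand-in for an LLM call: score how likely the failure is a real bug."""
    score = 0.5
    if "assert" in c.failure_message.lower():
        score += 0.3  # an explicit assertion mismatch is a stronger signal
    return score

def triage(candidates, threshold=0.7):
    """Return only candidates that survive both assessment stages."""
    kept = [c for c in candidates if rule_based_assessor(c)]
    return [c for c in kept if llm_assessor(c) >= threshold]

candidates = [
    CandidateCatch("test_balance", "AssertionError: expected 10, got -10", True),
    CandidateCatch("test_login", "Timeout after 30s", True),
    CandidateCatch("test_misc", "AssertionError: flaky ordering", False),
]
for c in triage(candidates):
    print(c.test_name)  # only test_balance survives both assessors
```

Only one of the three candidates reaches a human, which is the mechanism behind the reported 70% reduction in review load.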

πŸ“ Abstract
We report on Just-in-Time catching test generation at Meta, designed to prevent bugs in large-scale backend systems of hundreds of millions of lines of code. Unlike traditional hardening tests, which pass at generation time, catching tests are meant to fail, surfacing bugs before code lands. The primary challenge is to reduce development drag from false-positive test failures. Analyzing 22,126 generated tests, we show code-change-aware methods improve candidate catch generation 4x over hardening tests and 20x over coincidentally failing tests. To address false positives, we use rule-based and LLM-based assessors. These assessors reduce human review load by 70%. Inferential statistical analysis showed that human-accepted code changes are assessed to have significantly more false positives, while human-rejected changes have significantly more true positives. We reported 41 candidate catches to engineers; 8 were confirmed to be true positives, 4 of which would have led to serious failures had they remained uncaught. Overall, our results show that Just-in-Time catching is scalable, industrially applicable, and that it prevents serious failures from reaching production.
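The hardening-versus-catching distinction in the abstract can be made concrete with a small sketch. The function, its bug, and both tests are hypothetical examples, not code from the paper.

```python
# Illustrative contrast between a hardening test and a catching test,
# using an invented buggy code change.
def apply_discount(price: float, percent: float) -> float:
    # Buggy revision: subtracts percentage points instead of a fraction.
    return price - percent  # intended: price * (1 - percent / 100)

# Hardening test: asserts the code's *current* behavior, so it passes at
# generation time and only guards against future regressions.
def hardening_test() -> bool:
    return apply_discount(100.0, 20.0) == 80.0  # coincidentally correct input

# Catching test: asserts the *intended* behavior, so it fails right now,
# surfacing the bug before the change lands.
def catching_test() -> bool:
    return apply_discount(50.0, 20.0) == 40.0  # buggy code returns 30.0

print("hardening passes:", hardening_test())  # True
print("catching passes:", catching_test())    # False -> candidate catch
```

A failing catching test is only a *candidate* catch: the failure might reflect a real bug in the change or a wrong expectation in the generated test, which is why the paper's assessors and human review sit downstream of generation.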
Problem

Research questions and friction points this paper is trying to address.

Just-in-Time testing
catching tests
false positives
bug prevention
large-scale systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Just-in-Time Testing
Catching Tests
Code-Change-Aware
False Positive Reduction
LLM-Based Assessment
Matthew Becker
Meta Platforms, London, UK; Seattle and California, USA
Yifei Chen
Meta Platforms, London, UK; Seattle and California, USA
Nicholas Cochran
Meta Platforms, London, UK; Seattle and California, USA
Pouyan Ghasemi
Meta Platforms, London, UK; Seattle and California, USA
Abhishek Gulati
Meta Platforms, London, UK; Seattle and California, USA
Mark Harman
Research Scientist at Meta & Professor of Software Engineering at UCL
SBSE, Software Testing, Evolutionary Computation, Program Analysis, Software Engineering
Zachary Haluza
Meta Platforms, London, UK; Seattle and California, USA
Mehrdad Honarkhah
Meta Platforms, London, UK; Seattle and California, USA
HervΓ© Robert
Meta Platforms, London, UK; Seattle and California, USA
Jiacheng Liu
Meta Platforms, London, UK; Seattle and California, USA
Weini Liu
Meta Platforms, London, UK; Seattle and California, USA
Sreeja Thummala
Meta Platforms, London, UK; Seattle and California, USA
Xiaoning Yang
Meta Platforms, London, UK; Seattle and California, USA
Rui Xin
Research Scientist, Meta
deep learning, machine learning
Sophie Zeng
Meta Platforms, London, UK; Seattle and California, USA