🤖 AI Summary
This work addresses the poor generalizability of dynamically inferred specifications, which stems from insufficient test coverage and forces extensive manual filtering of invalid candidates. To mitigate this, the study integrates large language model (LLM)-generated counterexample tests into the dynamic inference pipeline: the generated tests automatically invalidate spurious assertions inferred by tools such as SpecFuzzer, improving precision without compromising recall. Experimental results show that the method discards up to 11.68% of invalid assertions and yields a precision gain of up to 7% in specification inference. This advancement improves both the accuracy and the degree of automation of dynamic specification inference, reducing reliance on human intervention while preserving the robustness of the inferred specifications.
📝 Abstract
Contract assertions, such as preconditions, postconditions, and invariants, play a crucial role in software development, enabling applications such as program verification, test generation, and debugging. Despite their benefits, the adoption of contract assertions is limited, due to the difficulty of manually producing such assertions. Dynamic analysis-based approaches, such as Daikon, can aid in this task by inferring expressive assertions from execution traces. However, a fundamental weakness of these methods is their reliance on the thoroughness of the test suites used for dynamic analysis. When these test suites do not contain sufficiently diverse tests, the inferred assertions are often not generalizable, leading to a high rate of invalid candidates (false positives) that must be manually filtered out.
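The failure mode described above can be illustrated with a minimal sketch (the function, traces, and candidate assertion below are hypothetical, not Daikon's actual algorithm or output): an invariant that holds on every execution in a narrow test suite, yet does not generalize.

```python
def clamp(x, lo, hi):
    """Clamp x into the interval [lo, hi]."""
    return max(lo, min(x, hi))

# A narrow test suite: every call uses an x that is already inside [lo, hi].
narrow_traces = [(3, 0, 10), (5, 1, 9), (7, 2, 8)]

# A Daikon-style candidate postcondition observed on those traces:
# "result == x" -- true for every trace above, but invalid in general.
def candidate(x, lo, hi, result):
    return result == x

# The candidate survives dynamic inference over the narrow suite...
assert all(candidate(x, lo, hi, clamp(x, lo, hi)) for x, lo, hi in narrow_traces)

# ...but a single counterexample test (the kind an LLM might generate)
# exposes it as a false positive: clamp(42, 0, 10) == 10, not 42.
assert not candidate(42, 0, 10, clamp(42, 0, 10))
```

Without such counterexamples, "result == x" would be reported as an inferred postcondition and would have to be filtered out manually.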
In this paper, we explore the use of large language models (LLMs) to automatically generate tests that attempt to invalidate generated assertions. Our results show that state-of-the-art LLMs can generate effective counterexamples that help to discard up to 11.68% of invalid assertions inferred by SpecFuzzer. Moreover, when incorporating these LLM-generated counterexamples into the dynamic analysis process, we observe an improvement of up to 7% in precision of the inferred specifications, with respect to the ground truths gathered from the evaluation benchmarks, without affecting recall.
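The filtering step can be sketched as follows. This is a hedged illustration of the general idea, not the paper's actual pipeline: the names `filter_assertions`, `candidates`, and `counterexamples` are hypothetical, and a real implementation would execute generated JUnit tests rather than Python predicates.

```python
# Sketch: run each candidate assertion against LLM-generated
# counterexample inputs and discard any candidate that is violated.

def filter_assertions(candidates, fn, counterexample_inputs):
    """Keep only candidate postconditions that hold on every counterexample."""
    kept = []
    for name, check in candidates:
        if all(check(args, fn(*args)) for args in counterexample_inputs):
            kept.append(name)
    return kept

def absolute(x):
    return x if x >= 0 else -x

# Candidate postconditions as (label, predicate over (args, result)) pairs.
candidates = [
    ("result >= 0", lambda args, r: r >= 0),        # valid in general
    ("result == x", lambda args, r: r == args[0]),  # only holds when x >= 0
]

# Counterexamples exercising inputs the original suite missed (negative x).
counterexamples = [(-5,), (-1,), (0,)]

print(filter_assertions(candidates, absolute, counterexamples))
# -> ['result >= 0']
```

Feeding the surviving counterexample executions back into the dynamic analysis, as the abstract describes, additionally prevents such over-specific candidates from being inferred in the first place, which is where the precision gain comes from.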