🤖 AI Summary
Differential testing of deep learning (DL) libraries faces two key bottlenecks: (1) difficulty in automatically identifying semantically equivalent API counterparts across libraries, and (2) insufficient input diversity, both of which limit how well the test oracle problem can be mitigated. To address these, the paper proposes DLLens, an LLM-enhanced differential testing technique. DLLens synthesizes counterparts for a given API by composing and adapting APIs from another DL library, and applies static analysis—aided by the LLM's knowledge of the DL library and its upstream dependencies—to extract path constraints that guide the generation of diverse, semantically valid inputs. Evaluated on TensorFlow and PyTorch, DLLens synthesizes counterparts for 1.84 times as many APIs as state-of-the-art techniques, and under the same time budget covers 7.23% more branches and detects 1.88 times as many bugs. It uncovered 71 defects, of which 59 were confirmed by developers—including 46 previously unknown bugs, 10 of which have already been fixed in newer releases.
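The constraint-guided input generation described above can be illustrated with a minimal sketch. Here the "extracted path constraints" are hypothetical predicates (stand-ins for what DLLens would derive from an API's implementation), and inputs are rejection-sampled so that every branch is exercised:

```python
import random

# Hypothetical path constraints for a scalar API, as might be extracted
# from its implementation (illustrative only; not DLLens's actual output).
constraints = [
    ("positive branch", lambda x: x > 0),
    ("negative branch", lambda x: x < 0),
    ("zero branch", lambda x: x == 0),
]

def generate_inputs(n_per_branch=3, seed=0):
    """Rejection-sample inputs so each extracted path constraint is satisfied
    by at least n_per_branch inputs, yielding branch-diverse test inputs."""
    rng = random.Random(seed)
    inputs = []
    for name, pred in constraints:
        found = []
        while len(found) < n_per_branch:
            candidate = rng.uniform(-10.0, 10.0)
            if name == "zero branch":
                candidate = 0.0  # sampling reals almost never hits exactly 0
            if pred(candidate):
                found.append(candidate)
        inputs.extend(found)
    return inputs
```

Without the constraints, purely random inputs would rarely reach narrow branches such as `x == 0`; solving (here, sampling toward) each constraint is what drives the branch-coverage gains the summary reports.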
📝 Abstract
Differential testing offers a promising strategy to alleviate the test oracle problem by comparing the test results of alternative implementations. However, existing differential testing techniques for deep learning (DL) libraries are limited by the key challenges of finding alternative implementations (called counterparts) for a given API and subsequently generating diverse test inputs. To address the two challenges, this paper introduces DLLens, an LLM-enhanced differential testing technique for DL libraries. To address the first challenge, DLLens incorporates an LLM-based counterpart synthesis workflow, with the insight that the counterpart of a given DL library API's computation could be successfully synthesized through certain composition and adaptation of the APIs from another DL library. To address the second challenge, DLLens incorporates a static analysis technique that extracts the path constraints from the implementations of a given API and its counterpart to guide diverse test input generation. The extraction is facilitated by the LLM's knowledge of the concerned DL library and its upstream libraries. We evaluate DLLens on two popular DL libraries, TensorFlow and PyTorch. Our evaluation shows that DLLens synthesizes counterparts for 1.84 times as many APIs as those found by state-of-the-art techniques on these libraries. Moreover, under the same time budget, DLLens covers 7.23% more branches and detects 1.88 times as many bugs as state-of-the-art techniques on 200 randomly sampled APIs. DLLens has successfully detected 71 bugs in recent TensorFlow and PyTorch releases. Among them, 59 are confirmed by developers, including 46 confirmed as previously unknown bugs; 10 of these previously unknown bugs have been fixed in the latest versions of TensorFlow and PyTorch.
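The differential oracle at the heart of this approach can be sketched in a few lines. The sketch below uses NumPy and the Python `math` module as lightweight stand-ins for a TensorFlow/PyTorch counterpart pair (e.g., `tf.math.log1p` vs. `torch.log1p`); any disagreement beyond a tolerance flags a potential bug in one of the two implementations:

```python
import math
import numpy as np

def differential_test(api, counterpart, inputs, tol=1e-6):
    """Run an API and its synthesized counterpart on the same inputs and
    collect result mismatches; a mismatch signals a potential bug."""
    mismatches = []
    for x in inputs:
        a, b = api(x), counterpart(x)
        if not math.isclose(a, b, rel_tol=tol, abs_tol=tol):
            mismatches.append((x, a, b))
    return mismatches

# Stand-in counterpart pair: NumPy's log1p vs. the stdlib's log1p.
bugs = differential_test(np.log1p, math.log1p, [0.0, 0.5, 1.0, 2.5])
```

In the real setting the two callables would come from different DL libraries (one of them the synthesized counterpart), and the inputs would be the constraint-guided tensors described in the abstract; equivalence checking for tensors would additionally need shape and dtype comparison and a floating-point tolerance chosen per operation.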