🤖 AI Summary
This work addresses the inefficiency and limited scalability of learning-guided inductive generalization in IC3/PDR-based hardware model checking, which stem from repeated clause-level graph analyses. To overcome this, the authors propose LeGend, a novel framework that is the first to bring one-time global representation learning to lemma generation. LeGend employs domain-adaptive self-supervised pretraining to learn latch embeddings that capture global circuit characteristics, coupled with a lightweight prediction model for efficient lemma generation. By decoupling the expensive representation-learning phase from fast inference, LeGend significantly accelerates two mainstream IC3/PDR engines across a diverse set of benchmarks, demonstrating both effectiveness and strong scalability.
📝 Abstract
Property checking of RTL designs is a central task in formal verification. Among available engines, IC3/PDR is a widely used backbone whose performance critically depends on inductive generalization, the step that generalizes a concrete counterexample-to-induction (CTI) cube into a lemma. Prior work has explored machine learning to guide this step and achieved encouraging results, yet most methods adopt a per-clause graph-analysis paradigm: for each clause they repeatedly build and analyze graphs, incurring heavy overhead and creating a scalability bottleneck. We introduce LeGend, which replaces this paradigm with one-time global representation learning. LeGend pre-trains a domain-adapted self-supervised model to produce latch embeddings that capture global circuit properties. These precomputed embeddings allow a lightweight model to predict high-quality lemmas with negligible overhead, effectively decoupling expensive learning from fast inference. Experiments show that LeGend accelerates two state-of-the-art IC3/PDR engines across a diverse set of benchmarks, presenting a promising path to scaling up formal verification.
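To make the inductive-generalization step concrete, here is a minimal sketch of the classic greedy literal-dropping loop that IC3/PDR engines use to turn a CTI cube into a lemma. This is a generic illustration, not LeGend's learned procedure: the `is_inductive` callback stands in for the solver query that checks whether a shrunken cube is still inductive relative to the current frame, and its name and interface are hypothetical.

```python
def generalize(cube, is_inductive):
    """Greedy inductive generalization (illustrative sketch).

    cube: list of literals describing a CTI state.
    is_inductive: hypothetical callback that returns True iff the
        given (non-empty) sub-cube is still inductive relative to
        the current frame; in a real engine this is a SAT query.

    Returns a minimal-by-inclusion sub-cube whose negation can be
    added as a lemma. A learned guide (as in LeGend) would instead
    rank which literals to try dropping first.
    """
    lemma = list(cube)
    i = 0
    while i < len(lemma):
        # Tentatively drop the i-th literal.
        candidate = lemma[:i] + lemma[i + 1:]
        if candidate and is_inductive(candidate):
            lemma = candidate  # Drop succeeded; retry same index.
        else:
            i += 1  # Literal is needed; keep it and move on.
    return lemma
```

For example, with a toy check that deems any sub-cube containing the literal `"a"` inductive, `generalize(["a", "b", "c"], check)` shrinks the cube to `["a"]`. Shorter lemmas like this block more states per frame, which is why the quality of generalization dominates IC3/PDR performance.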