🤖 AI Summary
In formal verification, cross-modal alignment between logical formulas and system models has long been overlooked. This paper proposes Contrastive Neural Model Checking (CNML), presented as the first approach to use model checking itself as a self-supervised signal for learning aligned embeddings of formulas and system behaviors in a shared latent space via contrastive learning. CNML combines neural-symbolic representations, joint embedding, and intra- and cross-modal contrastive objectives, and requires no human annotation. On industrial-scale retrieval tasks, CNML significantly outperforms classical algorithms and state-of-the-art neural baselines. The learned representations transfer well and generalize effectively, supporting downstream verification tasks, including reasoning over more complex temporal logic formulas.
📝 Abstract
Model checking is a key technique for verifying safety-critical systems against formal specifications, where recent applications of deep learning have shown promise. However, while ubiquitous for vision and language domains, representation learning remains underexplored in formal verification. We introduce Contrastive Neural Model Checking (CNML), a novel method that leverages the model checking task as a guiding signal for learning aligned representations. CNML jointly embeds logical specifications and systems into a shared latent space through a self-supervised contrastive objective. On industry-inspired retrieval tasks, CNML considerably outperforms both algorithmic and neural baselines in cross-modal and intra-modal settings. We further show that the learned representations effectively transfer to downstream tasks and generalize to more complex formulas. These findings demonstrate that model checking can serve as an objective for learning representations for formal languages.
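To make the "self-supervised contrastive objective" concrete, the sketch below shows a CLIP-style symmetric InfoNCE loss over in-batch pairs, where a formula embedding and a system embedding form a positive pair when the system satisfies the formula (the model checking signal). This is a minimal illustration under my own assumptions: the function name, the placeholder encoder outputs, and the temperature value are hypothetical and not taken from the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def cross_modal_contrastive_loss(formula_emb, system_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    formula_emb: (B, D) embeddings of logical specifications
    system_emb:  (B, D) embeddings of the systems that satisfy them
    Positive pairs lie on the diagonal of the similarity matrix;
    all other in-batch combinations act as negatives.
    """
    formula_emb = F.normalize(formula_emb, dim=-1)
    system_emb = F.normalize(system_emb, dim=-1)

    # (B, B) matrix of scaled cosine similarities.
    logits = formula_emb @ system_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions: formula -> system and system -> formula.
    loss_f2s = F.cross_entropy(logits, targets)
    loss_s2f = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_f2s + loss_s2f)


if __name__ == "__main__":
    # Toy usage: random tensors standing in for encoder outputs
    # (e.g. an LTL-formula encoder and a system/trace encoder).
    B, D = 8, 128
    formulas = torch.randn(B, D)
    systems = torch.randn(B, D)
    print(cross_modal_contrastive_loss(formulas, systems).item())
```

Because positives are defined by the model checker's verdict rather than human labels, this kind of objective needs no annotation, which matches the self-supervised framing in the abstract.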