Learning Representations Through Contrastive Neural Model Checking

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In formal verification, cross-modal alignment learning between logical formulas and system models has long been overlooked. This paper proposes Contrastive Neural Model Checking (CNML), the first approach to leverage model checking itself as a self-supervised signal for learning aligned embeddings of formulas and system behaviors in a shared latent space via contrastive learning. CNML combines neural-symbolic representation, joint embedding, and intra- and cross-modal contrastive objectives, and requires no human annotation. On industrial-scale retrieval tasks, CNML significantly outperforms classical algorithms and state-of-the-art neural baselines. The learned representations exhibit strong transferability and generalization, effectively supporting downstream verification tasks, including reasoning over complex temporal logic formulas.

📝 Abstract
Model checking is a key technique for verifying safety-critical systems against formal specifications, where recent applications of deep learning have shown promise. However, while ubiquitous in the vision and language domains, representation learning remains underexplored in formal verification. We introduce Contrastive Neural Model Checking (CNML), a novel method that leverages the model checking task as a guiding signal for learning aligned representations. CNML jointly embeds logical specifications and systems into a shared latent space through a self-supervised contrastive objective. On industry-inspired retrieval tasks, CNML considerably outperforms both algorithmic and neural baselines in cross-modal and intra-modal settings. We further show that the learned representations effectively transfer to downstream tasks and generalize to more complex formulas. These findings demonstrate that model checking can serve as an objective for learning representations for formal languages.
Problem

Research questions and friction points this paper is trying to address.

Learning aligned representations for formal specifications and systems
Improving cross-modal and intra-modal retrieval in model checking
Enhancing generalization of representations for complex logical formulas
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages model checking as representation learning signal
Embeds specifications and systems in shared latent space
Uses self-supervised contrastive objective for alignment
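The summary above does not spell out the training objective. As a rough sketch under stated assumptions, a CLIP-style symmetric contrastive (InfoNCE) loss over batches of paired formula/system embeddings captures the idea of pulling matching pairs together in the shared latent space while pushing non-matching pairs apart. All names and the temperature value here are illustrative, not taken from the paper:

```python
import numpy as np

def contrastive_loss(formula_emb, system_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    Row i of each matrix is assumed to embed a matching (formula, system)
    pair produced by a model-checking oracle; all other rows in the batch
    act as negatives. Shapes: (batch, dim) for both inputs.
    """
    # L2-normalize so the dot product is cosine similarity
    f = formula_emb / np.linalg.norm(formula_emb, axis=1, keepdims=True)
    s = system_emb / np.linalg.norm(system_emb, axis=1, keepdims=True)
    logits = (f @ s.T) / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(f))         # positives sit on the diagonal

    def cross_entropy(logits, labels):
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # Symmetric: retrieve systems from formulas and formulas from systems
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

With perfectly aligned pairs the loss approaches zero, while randomly paired embeddings yield a loss near log(batch size); minimizing this objective is what drives the cross-modal and intra-modal retrieval performance the paper reports.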