On the Probabilistic Learnability of Compact Neural Network Preimage Bounds

πŸ“… 2025-11-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Computing exact preimages of neural networks is #P-hard, rendering existing formal verification methods intractable for large-scale networks. To address this, we propose RF-ProVe, a novel framework that reframes preimage approximation as a probabilistic learning problem. RF-ProVe is the first method to integrate random forests, active resampling, and bootstrap randomization for statistically grounded preimage estimation. It provides rigorous statistical guarantees on both region purity and global coverage, enabling high-confidence, bounded-error preimage approximations. By combining ensembles of randomized decision trees with probabilistic approximation, RF-ProVe captures structural patterns in high-dimensional input spaces, identifying input regions that satisfy a given output constraint. Experimental results demonstrate that RF-ProVe generates compact preimage approximations even on networks where exact solvers fail to scale, significantly improving both the efficiency and scalability of neural network verification.
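The core idea can be sketched in a few lines: sample the input box, label each point by whether the network output satisfies the property, fit a bootstrap-randomized forest, and read candidate preimage boxes off root-to-leaf paths (each path is a conjunction of axis-aligned splits, i.e. a hyperrectangle). This is an illustrative sketch, not the paper's implementation; the toy network, threshold, and purity cutoff are assumptions, and scikit-learn's `RandomForestClassifier` stands in for whatever ensemble the authors use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: 2-D input -> scalar output.
def network(x):
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

# Output property whose preimage we want: output below a threshold.
def property_holds(y, threshold=0.5):
    return y < threshold

# 1) Sample the input box and label each point by the output property.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
X = rng.uniform(lo, hi, size=(5000, 2))
labels = property_holds(network(X))

# 2) Fit a random forest; bootstrap resampling is sklearn's default.
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# 3) Extract candidate preimage boxes from one tree: walk each
#    root-to-leaf path, intersecting the axis-aligned splits.
def leaf_boxes(tree, lo, hi, min_purity=0.95):
    boxes = []
    def walk(node, lo, hi):
        if tree.children_left[node] == -1:            # leaf node
            counts = tree.value[node][0]
            purity = counts[1] / counts.sum() if counts.sum() else 0.0
            if purity >= min_purity:                  # keep near-pure boxes
                boxes.append((lo.copy(), hi.copy(), purity))
            return
        f, t = tree.feature[node], tree.threshold[node]
        hi2 = hi.copy(); hi2[f] = min(hi[f], t)
        walk(tree.children_left[node], lo, hi2)       # x[f] <= t branch
        lo2 = lo.copy(); lo2[f] = max(lo[f], t)
        walk(tree.children_right[node], lo2, hi)      # x[f] >  t branch
    walk(0, lo.copy(), hi.copy())
    return boxes

candidates = leaf_boxes(forest.estimators_[0].tree_, lo, hi)
print(f"{len(candidates)} candidate boxes from the first tree")
```

Each returned box is an axis-aligned underapproximation candidate; the paper then refines such candidates with active resampling rather than trusting training-set purity.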

πŸ“ Abstract
Although recent provable methods have been developed to compute preimage bounds for neural networks, their scalability is fundamentally limited by the #P-hardness of the problem. In this work, we adopt a novel probabilistic perspective, aiming to deliver solutions with high-confidence guarantees and bounded error. To this end, we investigate the potential of bootstrap-based and randomized approaches that are capable of capturing complex patterns in high-dimensional spaces, including input regions where a given output property holds. In detail, we introduce $\textbf{R}$andom $\textbf{F}$orest $\textbf{Pro}$perty $\textbf{Ve}$rifier ($\texttt{RF-ProVe}$), a method that exploits an ensemble of randomized decision trees to generate candidate input regions satisfying a desired output property and refines them through active resampling. Our theoretical derivations offer formal statistical guarantees on region purity and global coverage, providing a practical, scalable solution for computing compact preimage approximations in cases where exact solvers fail to scale.
Problem

Research questions and friction points this paper is trying to address.

Computing neural network preimage bounds at scale
Providing probabilistic guarantees for output property verification
Approximating compact preimages when exact methods fail
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random Forest Property Verifier for preimage bounds
Probabilistic bootstrap-based approach with statistical guarantees
Active resampling refines candidate input regions
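The active-resampling idea above can be sketched as follows: certify a candidate box by drawing fresh samples inside it and lower-bounding its purity with a concentration inequality, and bisect boxes that fail the check. A Hoeffding bound is used here as a simple stand-in for the paper's statistical guarantees; the toy network, purity target, and depth limit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy network and output property as the sampling step.
def network(x):
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

def property_holds(y, threshold=0.5):
    return y < threshold

# Hoeffding-style lower confidence bound on a region's purity:
# with n fresh samples and empirical purity p_hat,
# P(true purity >= p_hat - eps) >= 1 - delta for eps = sqrt(ln(1/delta)/(2n)).
def purity_lower_bound(box, n=2000, delta=0.05):
    lo, hi = box
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    p_hat = property_holds(network(X)).mean()
    eps = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return p_hat - eps

# Active refinement: keep a box only if its certified purity clears the
# target; otherwise bisect along the widest axis and recurse.
def refine(box, target=0.9, depth=0, max_depth=6):
    if purity_lower_bound(box) >= target:
        return [box]
    if depth == max_depth:
        return []                        # give up on an impure fragment
    lo, hi = box
    axis = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[axis] + hi[axis])
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[axis] = mid; lo_right[axis] = mid
    return (refine((lo, hi_left), target, depth + 1, max_depth)
            + refine((lo_right, hi), target, depth + 1, max_depth))

start = (np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
kept = refine(start)
print(f"{len(kept)} certified sub-boxes")
```

The union of the kept boxes is a compact preimage approximation whose purity is certified with high probability; tightening `delta` or `target` trades coverage for confidence.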
πŸ”Ž Similar Papers
No similar papers found.