🤖 AI Summary
This work addresses the adversarial vulnerability of Neural Probabilistic Circuits (NPCs), a class of concept bottleneck models whose neural attribute recognition module can be manipulated by subtle input perturbations. We prove that the adversarial robustness of an NPC depends solely on its attribute recognition model and is independent of the probabilistic circuit; building on this insight, we propose RNPC, the first robust neural probabilistic circuit, which introduces a class-wise integration scheme for robustly combining the outputs of the two modules at inference time. Theoretical analysis shows that RNPC is provably more robust than NPC, and experiments on image classification show superior robust accuracy over existing concept bottleneck models while maintaining high accuracy on benign inputs. Key contributions include: (1) a theoretical characterization reducing NPC robustness to the robustness of the attribute recognition model; (2) RNPC, the first NPC variant with provably improved adversarial robustness; and (3) empirical gains in robustness that preserve both clean accuracy and interpretability.
📝 Abstract
Neural Probabilistic Circuits (NPCs), a new class of concept bottleneck models, comprise an attribute recognition model and a probabilistic circuit for reasoning. By integrating the outputs of these two modules, NPCs produce compositional and interpretable predictions. While NPCs offer enhanced interpretability and strong performance on downstream tasks, the neural-network-based attribute recognition model remains a black box. This leaves NPCs vulnerable to adversarial attacks that manipulate attribute predictions by introducing carefully crafted, subtle perturbations to input images, potentially compromising the final predictions. In this paper, we theoretically analyze the adversarial robustness of NPCs and show that it depends solely on the attribute recognition model and is independent of the probabilistic circuit. Moreover, we propose RNPC, the first robust neural probabilistic circuit against adversarial attacks on the recognition module. RNPC introduces a novel class-wise integration for inference, ensuring a robust combination of the two modules' outputs. Our theoretical analysis shows that RNPC is provably more robust than NPC. Empirical results on image classification tasks show that RNPC achieves superior adversarial robustness compared to existing concept bottleneck models while maintaining high accuracy on benign inputs.
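To make the two-module pipeline concrete, below is a minimal sketch of NPC-style inference: the attribute recognition model supplies attribute probabilities, and a stand-in for the probabilistic circuit supplies class distributions conditioned on attribute assignments; the final prediction marginalizes the latter over the former. All names (`npc_predict`, `class_given_attrs`) and the independence assumption across attributes are illustrative assumptions, not the paper's actual formulation.

```python
import itertools

import numpy as np


def npc_predict(attr_probs, class_given_attrs):
    """Sketch of NPC-style inference (hypothetical, simplified).

    attr_probs: shape (K,), P(attribute k = 1) from the recognition model.
    class_given_attrs: dict mapping an attribute assignment (tuple of 0/1)
        to a length-C array P(class | assignment), standing in for the
        probabilistic circuit's reasoning module.
    Returns a length-C array of class probabilities obtained by
    marginalizing over all attribute assignments.
    """
    K = len(attr_probs)
    C = len(next(iter(class_given_attrs.values())))
    class_probs = np.zeros(C)
    for assignment in itertools.product([0, 1], repeat=K):
        # P(assignment), assuming attributes are independent given the input.
        p_a = np.prod([attr_probs[k] if a else 1.0 - attr_probs[k]
                       for k, a in enumerate(assignment)])
        class_probs += p_a * np.asarray(class_given_attrs[assignment])
    return class_probs


# Toy example: one attribute, two classes, a deterministic circuit table.
attr_probs = np.array([0.9])
circuit = {(0,): np.array([1.0, 0.0]), (1,): np.array([0.0, 1.0])}
print(npc_predict(attr_probs, circuit))  # → [0.1 0.9]
```

Under this sketch, the circuit table is fixed and exact, so any adversarial change to the final prediction must come through `attr_probs`, which mirrors the paper's observation that NPC robustness hinges entirely on the attribute recognition model.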