AI Summary
This paper addresses the challenge of formally verifying individual fairness in deep neural networks (DNNs). We propose PyFair, the first open-source framework enabling formal, proof-based individual fairness verification for pre-trained DNNs. Methodologically, PyFair integrates concolic testing, symbolic execution, and formal verification; it introduces a novel dual-network architecture that ensures path-exploration completeness and generates fairness-guided path constraints compatible with diverse bias-mitigated model structures. Our contributions are threefold: (1) the first framework to provide provably correct individual-fairness verification for pre-trained DNNs; (2) successful detection of discriminatory instances and formal fairness verification on 25 benchmark models; and (3) identification of critical scalability bottlenecks of formal verification on complex model architectures. The implementation is publicly available.
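To make the notion of individual fairness concrete, the sketch below checks whether a model treats two inputs that differ only in a protected attribute differently. This is a hypothetical illustration, not PyFair's API: the function `find_discriminatory_pair`, the toy `biased_model`, and the attribute encoding are all assumptions chosen for the example; PyFair searches for such pairs systematically via fairness-guided path constraints rather than enumeration.

```python
import numpy as np

def find_discriminatory_pair(model, x, protected_idx, protected_values):
    """Return a counterpart of `x` that differs only in the protected
    attribute yet receives a different predicted class, or None if no
    such counterpart exists among the candidate values."""
    base_pred = np.argmax(model(x))
    for v in protected_values:
        if v == x[protected_idx]:
            continue
        x_alt = x.copy()
        x_alt[protected_idx] = v
        if np.argmax(model(x_alt)) != base_pred:
            return x_alt  # discriminatory instance found
    return None

# Toy "biased" linear classifier: feature 0 (the protected attribute)
# dominates the decision, so flipping it flips the predicted class.
def biased_model(x):
    score = x @ np.array([5.0, 0.1])
    return np.array([score, -score])

x = np.array([0.0, -1.0])
pair = find_discriminatory_pair(biased_model, x,
                                protected_idx=0, protected_values=[0.0, 1.0])
```

Here `pair` is `[1.0, -1.0]`: identical to `x` except for the protected attribute, yet classified differently, which is exactly the kind of witness PyFair's testing phase produces.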
Abstract
This paper introduces PyFair, a formal framework for evaluating and verifying individual fairness of Deep Neural Networks (DNNs). By adapting the concolic testing tool PyCT, we generate fairness-specific path constraints to systematically explore DNN behaviors. Our key innovation is a dual-network architecture that enables comprehensive fairness assessment and provides completeness guarantees for certain network types. We evaluate PyFair on 25 benchmark models, including models enhanced by existing bias mitigation techniques. Results demonstrate PyFair's efficacy in detecting discriminatory instances and verifying fairness, while also revealing scalability challenges for complex models. This work advances algorithmic fairness in critical domains by offering a rigorous, systematic method for fairness testing and verification of pre-trained DNNs.
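The dual-network idea can be sketched as follows: the same weights are evaluated side by side on an input and its protected-attribute counterpart, so one pass exposes both outcomes to analysis. This is a minimal illustration with hand-crafted weights, not the paper's construction; the network shape, the `dual_forward` helper, and the deliberately biased weight matrices are assumptions made for the example.

```python
import numpy as np

# Tiny ReLU network: 3 inputs (index 0 is the protected attribute), 2 classes.
# Weights are deliberately biased on feature 0 so a fairness violation exists.
W1 = np.array([[1.0, -1.0],
               [0.2,  0.2],
               [0.0,  0.0]])
W2 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])

def dual_forward(x, x_alt):
    """Dual-network pass: evaluate the shared weights on both inputs in a
    single batched call, mirroring a side-by-side composition of two
    weight-sharing copies of the network."""
    batch = np.stack([x, x_alt])
    h = np.maximum(batch @ W1, 0.0)   # shared ReLU hidden layer
    logits = h @ W2
    return logits[0], logits[1]

x = np.array([0.5, 0.3, -0.2])
x_alt = x.copy()
x_alt[0] = -x[0]                      # flip only the protected attribute
y, y_alt = dual_forward(x, x_alt)
unfair = int(np.argmax(y)) != int(np.argmax(y_alt))
```

In this toy case `unfair` is `True`: the two copies disagree on the predicted class, so the pair is a discriminatory instance. A verifier operating on the composed network would instead prove that no input region admits such a disagreement, which is where the completeness guarantee comes in.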