Concolic Testing on Individual Fairness of Neural Network Models

πŸ“… 2025-09-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper addresses the challenge of formally verifying individual fairness in deep neural networks (DNNs). The authors propose PyFair, the first open-source framework enabling formal, proof-based individual fairness verification for pre-trained DNNs. Methodologically, PyFair integrates concolic testing, symbolic execution, and formal verification; it introduces a novel dual-network architecture to ensure path-exploration completeness and generates fairness-guided path constraints compatible with diverse bias-mitigated model structures. The contributions are threefold: (1) the first framework to provide provably correct individual fairness verification for pre-trained DNNs; (2) successful detection of discriminatory instances and formal fairness verification on 25 benchmark models; and (3) identification of critical scalability bottlenecks in formal verification under complex model architectures. The implementation is publicly available.

πŸ“ Abstract
This paper introduces PyFair, a formal framework for evaluating and verifying individual fairness of Deep Neural Networks (DNNs). By adapting the concolic testing tool PyCT, we generate fairness-specific path constraints to systematically explore DNN behaviors. Our key innovation is a dual network architecture that enables comprehensive fairness assessments and provides completeness guarantees for certain network types. We evaluate PyFair on 25 benchmark models, including those enhanced by existing bias mitigation techniques. Results demonstrate PyFair's efficacy in detecting discriminatory instances and verifying fairness, while also revealing scalability challenges for complex models. This work advances algorithmic fairness in critical domains by offering a rigorous, systematic method for fairness testing and verification of pre-trained DNNs.
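The dual-network idea described in the abstract can be sketched as follows: the same pre-trained model is evaluated on twin inputs that agree on every attribute except the protected one, and a prediction mismatch is a witness against individual fairness. A minimal illustration (the network weights, input dimension, and protected-attribute index are hypothetical stand-ins, and random sampling replaces PyFair's concolic exploration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained single-hidden-layer network (random weights
# as stand-ins; PyFair operates on real pre-trained DNNs).
W1 = rng.normal(size=(4, 8))
b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 2))
b2 = rng.normal(size=2)

def predict(x):
    """Forward pass: ReLU hidden layer, argmax output class."""
    h = np.maximum(0.0, x @ W1 + b1)
    return int(np.argmax(h @ W2 + b2))

PROTECTED = 0  # index of the protected attribute (e.g. sex); an assumption

def is_discriminatory(x):
    """Dual-network style check: evaluate the same model on two copies of x
    that differ only in the protected attribute; a prediction mismatch is a
    counterexample to individual fairness."""
    x_twin = x.copy()
    x_twin[PROTECTED] = 1.0 - x_twin[PROTECTED]  # flip a binary attribute
    return predict(x) != predict(x_twin)

# Naive random search over inputs; PyFair instead steers this exploration
# with fairness-guided path constraints and can certify when none exist.
witnesses = [x for x in rng.uniform(0, 1, size=(1000, 4)) if is_discriminatory(x)]
print(f"discriminatory instances found: {len(witnesses)} / 1000")
```

Unlike this random probe, the paper's framework is proof-based: when the constraint search is exhaustive, finding no witness verifies fairness rather than merely failing to refute it.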
Problem

Research questions and friction points this paper is trying to address.

Evaluating individual fairness of neural networks
Generating fairness-specific path constraints systematically
Providing completeness guarantees for fairness assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concolic testing for fairness-specific path constraints
Dual network architecture for comprehensive fairness assessments
Systematic fairness testing with completeness guarantees
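The concolic-testing ingredient above alternates concrete execution with symbolic reasoning over the path taken: each ReLU's activation status is one branch of the path condition, and negating a branch yields a constraint steering the search toward unexplored behaviors. A toy sketch with hand-picked weights (all values are illustrative; a real tool such as PyCT dispatches the negated constraint to a solver rather than exhibiting a solution by hand):

```python
import numpy as np

# Hand-picked 2-input, 2-ReLU layer so branch behavior is easy to follow
# (an illustrative stand-in, not one of the paper's benchmark models).
W = np.array([[1.0, 1.0],
              [-1.0, 1.0]])   # pre-activations: z = x @ W + b
b = np.array([0.0, -1.0])     # z0 = x0 - x1, z1 = x0 + x1 - 1

def path(x):
    """Path condition: which ReLUs fire (pre-activation > 0)."""
    return tuple(bool(v > 0) for v in np.asarray(x) @ W + b)

x0 = [0.2, 0.8]
p0 = path(x0)                 # concrete run records the path taken
print("path at x0:", p0)      # β†’ (False, False)

# Negate the first branch of the path condition: ask for an input where
# ReLU 0 sits on the opposite side of its hyperplane x0 - x1 > 0.
# A concolic tool solves this constraint; here we exhibit a solution.
x1 = [0.8, 0.2]
print("branch 0 flipped:", path(x1)[0] != p0[0])   # β†’ True
```

Fairness-specific path constraints add the twin-input requirement on top of such branch conditions, so the solver searches only over input pairs that differ in the protected attribute.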
πŸ‘₯ Authors
Ming-I Huang
National Chengchi University, Taipei, Taiwan
Chih-Duo Hong
National Chengchi University, Taipei, Taiwan
Fang Yu
Associate Professor, Dept. Management Information Systems, National Chengchi University
Software Verification · String Analysis · Automata Theory · Web Security