Feature-Guided Analysis of Neural Networks: A Replication Study

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural network decision interpretability is critically needed in safety-critical applications, yet existing methods lack rigorous validation under industrial conditions. Method: This paper systematically evaluates Feature-Guided Analysis (FGA) for industrial applicability, conducting empirical assessments on MNIST and the Label-Specific Classification (LSC) benchmark. Using neuron activation monitoring and rule extraction, we analyze FGA’s robustness across diverse neural architectures, training strategies, and feature selection schemes. Contribution/Results: We find that model architecture significantly affects recall but has limited impact on precision. FGA achieves superior precision (+3.2% over state-of-the-art) and cross-dataset stability on both benchmarks. Crucially, it demonstrates consistent, trustworthy explanatory capability under heterogeneous industrial conditions—marking the first such validation. This work provides essential empirical evidence bridging the gap between FGA’s theoretical promise and real-world deployment.

📝 Abstract
Understanding why neural networks make certain decisions is pivotal for their use in safety-critical applications. Feature-Guided Analysis (FGA) extracts slices of neural networks relevant to their tasks. Existing feature-guided approaches typically monitor the activations of the neural network's neurons to extract the relevant rules. Preliminary results are encouraging and demonstrate the feasibility of this solution by assessing the precision and recall of Feature-Guided Analysis on two pilot case studies. However, its applicability in industrial contexts needs additional empirical evidence. To address this need, this paper assesses the applicability of FGA on a benchmark made of the MNIST and LSC datasets. We assessed the effectiveness of FGA in computing rules that explain the behavior of the neural network. Our results show that FGA has a higher precision on our benchmark than the results reported in the literature. We also evaluated how the selection of the neural network architecture, training, and feature selection affects the effectiveness of FGA. Our results show that this selection significantly affects the recall of FGA, while it has a negligible impact on its precision.
Problem

Research questions and friction points this paper is trying to address.

Assessing applicability of Feature-Guided Analysis on MNIST and LSC benchmark datasets
Evaluating effectiveness of FGA in computing rules explaining neural network behavior
Analyzing how neural network architecture and feature selection affect FGA performance
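The effectiveness question above reduces to scoring extracted rules against the network's own decisions: a rule is precise when its firing implies the network predicts the target class, and has high recall when it fires on most inputs the network assigns to that class. A minimal sketch of that scoring, with hypothetical names (`rule_fires`, `model_predicts_class`) not taken from the paper:

```python
# Hedged sketch: scoring an extracted rule against the network's own
# predictions. The rule "explains" an input when its firing agrees with
# the network predicting the rule's target class.

def precision_recall(rule_fires, model_predicts_class):
    """Precision/recall of a boolean rule w.r.t. the model's decisions.

    rule_fires[i]           -- True if the rule's condition holds on input i
    model_predicts_class[i] -- True if the network outputs the target class on input i
    """
    pairs = list(zip(rule_fires, model_predicts_class))
    tp = sum(r and m for r, m in pairs)          # rule fires, model agrees
    fp = sum(r and not m for r, m in pairs)      # rule fires, model disagrees
    fn = sum(not r and m for r, m in pairs)      # model predicts class, rule silent
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy run: the rule fires on 3 of 6 inputs, the model predicts the class on 4.
rule = [True, True, True, False, False, False]
model = [True, True, False, True, True, False]
p, r = precision_recall(rule, model)  # tp=2, fp=1, fn=2 → (2/3, 0.5)
```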
Innovation

Methods, ideas, or system contributions that make the work stand out.

FGA extracts task-relevant neural network slices
FGA monitors neuron activations for rule extraction
FGA evaluates precision and recall on benchmarks
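The monitor-then-extract idea above can be sketched under a simple abstraction: a neuron is "active" when its value exceeds a threshold, and a candidate rule for a class is the set of neurons active on every recorded input of that class. This is only an illustration of the general technique; the paper's actual slicing and rule-extraction procedure may differ, and all names here are hypothetical.

```python
# Hedged sketch of activation-based rule extraction, assuming the
# "neuron i is active iff its activation exceeds a threshold" abstraction.

THRESHOLD = 0.0  # e.g. post-ReLU activations: active means strictly positive

def activation_pattern(activations, threshold=THRESHOLD):
    """Map a layer's activation vector to the set of active neuron indices."""
    return frozenset(i for i, a in enumerate(activations) if a > threshold)

def extract_rule(class_activations):
    """A simple rule: the neurons active on *every* recorded input of the class."""
    patterns = [activation_pattern(a) for a in class_activations]
    rule = patterns[0]
    for p in patterns[1:]:
        rule &= p  # keep only neurons active across all observed inputs
    return rule

def rule_fires(rule, activations):
    """The rule fires when every neuron in it is active on this input."""
    return rule <= activation_pattern(activations)

# Toy layer with 4 neurons, monitored on three inputs of one class:
recorded = [[0.9, 0.0, 0.4, 0.2],
            [0.7, 0.1, 0.0, 0.3],
            [0.5, 0.0, 0.6, 0.1]]
rule = extract_rule(recorded)                     # neurons 0 and 3 always active
fires = rule_fires(rule, [1.0, 0.0, 0.0, 0.5])    # neurons 0 and 3 active here too
```

Intersecting patterns this way favors precision (the rule only keeps consistently active neurons); recall then depends on how representative the recorded inputs are, which is consistent with the paper's finding that architecture and feature selection mainly move recall.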