Smaller is Better: Enhancing Transparency in Vehicle AI Systems via Pruning

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
The transparency and safety of vehicular AI systems rely on high-fidelity post-hoc explanations; however, existing methods often suffer from low explanation fidelity and inconsistency, undermining trust. Method: This work systematically investigates the impact of model pruning on the post-hoc interpretability of traffic sign classifiers, comparing three training paradigms (natural training, adversarial training, and pruning) and quantifying explanation fidelity and comprehensibility via saliency maps. Contribution/Results: We find that pruning not only induces structural sparsity and improves deployment efficiency but also significantly improves the alignment between explanations and the model's true decision pathways, challenging the conventional assumption that compression inherently degrades interpretability. To our knowledge, this is the first study to demonstrate that structured sparsification can jointly improve both efficiency and interpretability in deep learning models. Our findings establish a novel design paradigm for resource-constrained automotive AI systems that simultaneously ensures safety, transparency, and lightweight operation.

📝 Abstract
Connected and autonomous vehicles rely heavily on AI systems, where transparency and security are critical for trust and operational safety. Post-hoc explanations provide transparency into these black-box-like AI models, but the quality and reliability of these explanations are often questioned due to inconsistencies and a lack of faithfulness in representing model decisions. This paper systematically examines how three widely used training approaches, namely natural training, adversarial training, and pruning, affect the quality of post-hoc explanations for traffic sign classifiers. Through extensive empirical evaluation using saliency maps, we demonstrate that pruning significantly enhances the comprehensibility and faithfulness of explanations. Our findings reveal that pruning not only improves model efficiency but also enforces sparsity in the learned representations, leading to more interpretable and reliable decisions. Additionally, these insights suggest that pruning is a promising strategy for developing transparent deep learning models, especially in resource-constrained vehicular AI systems.
Problem

Research questions and friction points this paper is trying to address.

Enhancing transparency and security in vehicle AI systems through pruning
Improving the quality and reliability of post-hoc explanations for traffic sign classifiers
Developing interpretable deep learning models for resource-constrained vehicular systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pruning enhances explanation faithfulness and comprehensibility
Pruning enforces sparsity for interpretable decisions
Pruning improves model efficiency and transparency
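As a loose illustration of the mechanism behind these bullets (this is not the paper's code, and all names here, `predict`, `magnitude_prune`, `saliency`, are hypothetical), a toy linear scorer shows why pruning-induced sparsity can sharpen saliency-based explanations: magnitude pruning zeroes small weights, and the input-gradient saliency of the pruned model is then exactly zero on the removed connections, so attribution concentrates on the features the model actually uses.

```python
# Minimal sketch, assuming a toy linear model in place of a real
# traffic sign classifier. Not the paper's method.

def predict(weights, x):
    # Linear score: dot(weights, x); stands in for a full classifier output.
    return sum(w * xi for w, xi in zip(weights, x))

def magnitude_prune(weights, sparsity):
    # Zero out roughly the fraction `sparsity` of weights with the
    # smallest magnitude (ties may prune slightly more; fine for a sketch).
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def saliency(weights, x, eps=1e-6):
    # |d score / d x_i| via central finite differences, i.e. a
    # gradient-based saliency map for this toy model.
    grads = []
    for i in range(len(x)):
        hi, lo = x[:], x[:]
        hi[i] += eps
        lo[i] -= eps
        grads.append((predict(weights, hi) - predict(weights, lo)) / (2 * eps))
    return [abs(g) for g in grads]

weights = [0.9, -0.05, 0.02, 0.7, -0.01, 0.4]
x = [1.0] * 6
pruned = magnitude_prune(weights, sparsity=0.5)

print("dense saliency: ", saliency(weights, x))
print("pruned saliency:", saliency(pruned, x))
```

For the dense model every input receives some nonzero attribution, however small; after pruning, the saliency map is exactly zero wherever a connection was removed, which mirrors the paper's observation that sparsity makes explanations more faithful to the decision pathways that remain.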