Hierarchical Feature-level Reverse Propagation for Post-Training Neural Networks

📅 2025-06-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak interpretability, safety-assurance challenges, and high training coupling inherent in end-to-end autonomous driving models, this paper proposes a hierarchical, decoupled post-training optimization framework. It reconstructs intermediate feature maps by propagating backward from output labels, formulating feature inversion as a well-posed problem (a system of linear equations or a least-squares problem), thereby achieving gradient-free, feature-level inverse supervision for the first time. This enables the generation of surrogate supervision signals that support independent, transparent, and interpretable optimization of individual network modules, overcoming the parameter-coupling limitations of conventional backpropagation. Evaluated on multiple image classification benchmarks, the method significantly improves generalization performance and inference efficiency while maintaining theoretical rigor and engineering practicality.

📝 Abstract
End-to-end autonomous driving has emerged as a dominant paradigm, yet its highly entangled black-box models pose significant challenges in terms of interpretability and safety assurance. To improve model transparency and training flexibility, this paper proposes a hierarchical and decoupled post-training framework tailored for pretrained neural networks. By reconstructing intermediate feature maps from ground-truth labels, surrogate supervisory signals are introduced at transitional layers to enable independent training of specific components, thereby avoiding the complexity and coupling of conventional end-to-end backpropagation and providing interpretable insights into networks' internal mechanisms. To the best of our knowledge, this is the first method to formalize feature-level reverse computation as well-posed optimization problems, which we rigorously reformulate as systems of linear equations or least squares problems. This establishes a novel and efficient training paradigm that extends gradient backpropagation to feature backpropagation. Extensive experiments on multiple standard image classification benchmarks demonstrate that the proposed method achieves superior generalization performance and computational efficiency compared to traditional training approaches, validating its effectiveness and potential.
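The abstract's core idea can be illustrated with a minimal sketch: for a single linear layer, a target output can be inverted back to a surrogate intermediate feature by solving a least-squares problem, with no gradient computation. The layer shape, random data, and variable names below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of feature-level reverse computation for one
# linear layer y = W @ x. Given a desired output y_target (e.g. derived
# from ground-truth labels), we reconstruct a surrogate input feature
# x_surrogate by solving  argmin_x ||W x - y_target||^2  -- a
# gradient-free least-squares inversion, as the abstract describes.

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 4))   # assumed layer weights: 10 outputs, 4-dim feature
x_true = rng.standard_normal(4)    # "ground-truth" intermediate feature
y_target = W @ x_true              # desired layer output

# Least-squares feature inversion (well-posed for full-rank W)
x_surrogate, *_ = np.linalg.lstsq(W, y_target, rcond=None)

# For this overdetermined but consistent system, recovery is exact
# (up to floating-point precision), so x_surrogate can serve as a
# surrogate supervision target for the preceding module.
assert np.allclose(x_surrogate, x_true)
```

In a deeper network, the paper's framework would apply such an inversion layer by layer, so each module can be trained against its reconstructed feature target independently of the others.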
Problem

Research questions and friction points this paper is trying to address.

Improving interpretability and safety in autonomous driving models
Enabling independent training of neural network components
Formalizing feature-level reverse computation as optimization problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical feature-level reverse propagation
Surrogate supervisory signals for training
Feature backpropagation as optimization problems
Ni Ding
University of Auckland
Information Theory · Information Science · Signal Processing · Privacy · Discrete Optimization
Lei He
School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China; State Key Laboratory of Intelligent Green Vehicle and Mobility, Tsinghua University, Beijing 100084, China
Shengbo Eben Li
School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China; State Key Laboratory of Intelligent Green Vehicle and Mobility, Tsinghua University, Beijing 100084, China
Keqiang Li
Department of Automotive Engineering, Tsinghua University
Intelligent Vehicles · Advanced Driver Assistance Systems