🤖 AI Summary
To address the mismatch between static evaluation metrics and dynamic learning processes in feature selection, as well as the absence of quantitative stability criteria, this paper proposes what it describes as the first temporally aware dynamic evaluation framework. The framework models gradient response trajectories and stability decay patterns to quantify each feature's importance contribution in real time. It integrates backward-propagated gradient flow analysis, sliding-window importance aggregation, feature-level stability regularization, and online normalized scoring, keeping the evaluation metric tightly aligned with the model's optimization dynamics. Evaluated on 12 benchmark datasets, the method significantly enhances the discriminative power of the selected feature subsets (average AUC improvement of +3.2%), reduces false selection of redundant features by 41%, and supports interpretability-driven adaptive pruning.
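The core mechanism, per the summary, is combining gradient-based importance with sliding-window aggregation and online normalization. Below is a minimal sketch of that idea; the class name, window size, and the choice of mean absolute input-gradient as the per-step importance signal are all assumptions, since the summary does not give the paper's exact formulation.

```python
from collections import deque
import numpy as np

class SlidingWindowImportance:
    """Hypothetical sketch: aggregate per-feature gradient magnitudes
    over a sliding window and expose an online-normalized score.
    Details (window statistic, normalization) are assumptions, not
    the paper's verified method."""

    def __init__(self, n_features: int, window: int = 50):
        self.n_features = n_features
        # deque(maxlen=...) drops the oldest step automatically,
        # giving the sliding-window behavior.
        self.window = deque(maxlen=window)

    def update(self, input_grads: np.ndarray) -> None:
        # input_grads: (batch, n_features) gradients of the training
        # loss w.r.t. the input features at one optimization step.
        step_importance = np.abs(input_grads).mean(axis=0)
        self.window.append(step_importance)

    def scores(self) -> np.ndarray:
        # Window-mean importance, normalized to sum to 1 so scores
        # are comparable across training steps ("online normalized
        # scoring" in the summary's terms).
        if not self.window:
            return np.zeros(self.n_features)
        agg = np.mean(self.window, axis=0)
        total = agg.sum()
        return agg / total if total > 0 else np.zeros(self.n_features)

# Usage with synthetic gradients standing in for real backprop output:
tracker = SlidingWindowImportance(n_features=4, window=10)
rng = np.random.default_rng(0)
for _ in range(20):
    tracker.update(rng.normal(size=(32, 4)))
scores = tracker.scores()  # per-feature scores summing to 1
```

A tracker like this could then feed the adaptive-pruning step the summary mentions, e.g. by dropping features whose normalized score stays below a threshold across the window.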