🤖 AI Summary
This work addresses the dual challenge of robustness degradation and increased false positives on benign samples (i.e., performance regression) when updating Windows malware detectors under adversarial EXE attacks, formalizing and jointly optimizing this previously unaddressed update trade-off. We propose a dynamic update mechanism that integrates gradient-aware adversarial sample filtering with an incremental retraining framework to improve robustness and generalization simultaneously. Our approach combines PE-specific feature engineering, joint optimization of XGBoost and deep models, adversarial training, and gradient-sensitive sampling. Evaluated on the RealWorld-EXE dataset, our method achieves a 12.7% improvement in adversarial accuracy and reduces the benign false positive rate to 0.3%, significantly outperforming existing detector update strategies.
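To make the gradient-sensitive sampling idea concrete, here is a minimal, hypothetical sketch: it scores each candidate sample by the norm of the per-sample loss gradient with respect to the input and keeps the highest-scoring fraction for retraining. The logistic-regression "detector", the function names, and the `keep_ratio` parameter are all illustrative assumptions, not the paper's actual models (XGBoost plus deep networks) or PE features.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grad_norm(w, X, y):
    """L2 norm of each sample's loss gradient w.r.t. its input features.

    For logistic loss, d(loss_i)/d(x_i) = (p_i - y_i) * w, so the norm is
    |p_i - y_i| * ||w||: samples near the decision boundary, or ones the
    model gets wrong, receive the largest scores.
    """
    p = sigmoid(X @ w)
    return np.abs(p - y) * np.linalg.norm(w)

def gradient_sensitive_filter(w, X, y, keep_ratio=0.5):
    """Keep the fraction of samples with the highest gradient norms
    (hypothetical stand-in for the paper's filtering step)."""
    scores = per_sample_grad_norm(w, X, y)
    k = max(1, int(keep_ratio * len(y)))
    idx = np.argsort(scores)[::-1][:k]
    return X[idx], y[idx]

# Toy data: 100 samples with 4 features each.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
X = rng.normal(size=(100, 4))
y = (rng.random(100) < 0.5).astype(float)
X_sel, y_sel = gradient_sensitive_filter(w, X, y, keep_ratio=0.3)
print(X_sel.shape)  # (30, 4)
```

The filtered subset would then feed an incremental retraining pass; in an actual pipeline, the gradient would come from the deployed deep model rather than this toy classifier.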