Feature-Aware Test Generation for Deep Learning Models

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Detect, a novel framework for controllable testing of deep learning models through disentangled semantic attributes. Existing testing methods lack insight into the semantic causes of failures and offer little fine-grained control; Detect addresses this by applying targeted perturbations to semantic features in the latent space. It leverages vision-language models for semantic attribution, distinguishing task-relevant from task-irrelevant features, and thereby precisely identifies shortcut learning behaviors and robustness vulnerabilities. Experiments on image classification and object detection demonstrate that Detect significantly outperforms current approaches: it efficiently uncovers decision boundaries and reveals architectural biases, such as convolutional networks' tendency to overfit to local cues and Transformers' reliance on global context.

📝 Abstract
As deep learning models are widely used in software systems, test generation plays a crucial role in assessing the quality of such models before deployment. To date, the most advanced test generators rely on generative AI to synthesize inputs; however, these approaches remain limited in providing semantic insight into the causes of misbehaviours and in offering fine-grained semantic controllability over the generated inputs. In this paper, we introduce Detect, a feature-aware test generation framework for vision-based deep learning (DL) models that systematically generates inputs by perturbing disentangled semantic attributes within the latent space. Detect perturbs individual latent features in a controlled way and observes how these changes affect the model's output. Through this process, it identifies which features lead to behavior shifts and uses a vision-language model for semantic attribution. By distinguishing between task-relevant and irrelevant features, Detect applies feature-aware perturbations targeted at both generalization and robustness. Empirical results across image classification and detection tasks show that Detect generates high-quality test cases with fine-grained control, reveals distinct shortcut behaviors across model architectures (convolutional and transformer-based), and exposes bugs that are not captured by accuracy metrics. Specifically, Detect outperforms a state-of-the-art test generator in decision boundary discovery and a leading spurious feature localization method in identifying robustness failures. Our findings show that fully fine-tuned convolutional models are prone to overfitting on localized cues, such as co-occurring visual traits, while weakly supervised transformers tend to rely on global features, such as environmental variances. These findings highlight the value of interpretable and feature-aware testing in improving DL model reliability.
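The core loop the abstract describes, perturbing individual latent features and observing which ones shift the model's prediction, can be illustrated with a minimal sketch. The `generator` (latent-to-input decoder), `model` (classifier under test), and perturbation scale `eps` below are all hypothetical stand-ins, not the paper's actual components:

```python
def generator(z):
    # Hypothetical "decoder": maps a 4-dim latent code to a 2-dim input.
    # Latent dims 0 and 1 each drive one input coordinate, dim 2 drives
    # both (entangled), and dim 3 is inert.
    return (z[0] + 0.5 * z[2], z[1] + 0.5 * z[2])

def model(x):
    # Hypothetical classifier under test: predicts class 1 iff x[0] > x[1].
    return int(x[0] - x[1] > 0)

def sensitive_features(z, eps=1.0):
    """Perturb each latent dimension in isolation (in both directions)
    and report the dimensions whose perturbation flips the prediction."""
    base = model(generator(z))
    flips = []
    for i in range(len(z)):
        for delta in (+eps, -eps):
            z_pert = list(z)
            z_pert[i] += delta
            if model(generator(z_pert)) != base:
                flips.append(i)
                break
    return flips

print(sensitive_features([0.2, 0.0, 0.0, 0.0]))  # → [0, 1]
```

In the real framework, the flagged dimensions would then be passed to a vision-language model for semantic attribution, so that task-relevant and task-irrelevant features can be perturbed separately.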
Problem

Research questions and friction points this paper is trying to address.

test generation
deep learning
semantic controllability
model reliability
feature awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

feature-aware testing
latent space perturbation
semantic attribution
disentangled representation
deep learning robustness