🤖 AI Summary
Visual classifiers often exhibit overconfidence on out-of-distribution (OOD) inputs, yet existing adversarial and inversion-based methods fail to simultaneously achieve high classification confidence and substantial deviation from the training distribution. Method: We propose a generative approach that synthesizes high-confidence "Confidently Classified Counterfeits" by replacing the soft-vector condition in network inversion with a one-hot class-label condition and adding a KL-divergence constraint that jointly regulates confidence and distributional shift. Contribution/Results: The method reliably generates synthetic OOD samples to which standard classifiers assign high-confidence predictions, providing the first empirical evidence of systematic false confidence in mainstream models under OOD conditions. The generated samples form a reproducible, interpretable benchmark for evaluating model robustness and calibration beyond the in-distribution (ID) setting, and experiments across multiple architectures and datasets validate both the efficacy and the generalizability of the approach.
📝 Abstract
In machine learning, and especially with vision classifiers, generating inputs that a model classifies with high confidence is essential for understanding its decision boundaries and behavior. Creating samples that are confidently classified yet distinct from the training data distribution, however, is challenging: traditional methods typically perturb existing inputs and do not guarantee confident classification. In this work, we extend network inversion techniques to generate Confidently Classified Counterfeits: synthetic samples that the model classifies with high confidence despite being significantly different from the training data. We achieve this by changing the generator's conditioning mechanism from soft-vector conditioning to one-hot vector conditioning and by minimizing the Kullback-Leibler divergence (KLD) between the one-hot vectors and the classifier's output distribution. This encourages the generator to produce samples that are both plausible and confidently classified. Generating Confidently Classified Counterfeits is crucial for ensuring the safety and reliability of machine learning systems, particularly in safety-critical applications where models must exhibit confidence only on data within the training distribution. By generating such counterfeits, we challenge the assumption that high-confidence predictions always indicate in-distribution data, providing deeper insight into the model's limitations and decision-making process.
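The KLD term described above, between a one-hot class condition and the classifier's output distribution, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; the function names and the toy logits are invented for the example:

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier logits to a probability distribution."""
    z = logits - logits.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kld_onehot(onehot, probs, eps=1e-12):
    """KL(onehot || probs): with a one-hot target this reduces to
    -log(probs[target_class]), i.e. hard-label cross-entropy."""
    return float(-(onehot * np.log(probs + eps)).sum())

# Toy example: classifier logits for one generated sample, target class 2.
logits = np.array([0.5, 1.0, 4.0, 0.2])
probs = softmax(logits)
onehot = np.eye(4)[2]                  # one-hot condition given to the generator
loss = kld_onehot(onehot, probs)
# Minimizing this loss (through the generator's parameters during training)
# drives the classifier toward a confident prediction of the conditioned class.
```

In actual training the loss would be backpropagated through the frozen classifier into the generator; the sketch only shows that the KLD with a one-hot target penalizes any confidence short of certainty in the target class.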