Does the Data Processing Inequality Reflect Practice? On the Utility of Low-Level Tasks

📅 2025-12-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The data processing inequality (DPI) suggests that preprocessing cannot increase mutual information between features and labels, yet empirical classification accuracy often improves with preprocessing—creating an apparent paradox in finite-sample settings. Method: We systematically investigate how low-level preprocessing (e.g., denoising, encoding) enhances classification accuracy. Theoretically, we prove that for any finite training set, there exists a preprocessing transformation strictly improving classification accuracy; we further quantify how inter-class separability, sample size, and class balance modulate this gain. Experimentally, we evaluate classical denoising methods coupled with deep classifiers on controlled-noise benchmark datasets. Contribution/Results: Our analysis challenges the universal applicability of DPI in practical learning, establishing a new theoretical foundation for signal preprocessing. Experiments confirm that preprocessing yields significant accuracy gains under small-sample, high-noise, or imbalanced regimes—aligning quantitatively with our theoretical predictions. This work bridges information-theoretic principles with empirical learning behavior, offering both theoretical insight and practical guidance for preprocessing design.

Technology Category

Application Category

📝 Abstract
The data processing inequality is an information-theoretic principle stating that the information content of a signal cannot be increased by processing the observations. In particular, it suggests that there is no benefit in enhancing the signal or encoding it before addressing a classification problem. This assertion can be proven to be true for the case of the optimal Bayes classifier. However, in practice, it is common to perform "low-level" tasks before "high-level" downstream tasks despite the overwhelming capabilities of modern deep neural networks. In this paper, we aim to understand when and why low-level processing can be beneficial for classification. We present a comprehensive theoretical study of a binary classification setup, where we consider a classifier that is tightly connected to the optimal Bayes classifier and converges to it as the number of training samples increases. We prove that for any finite number of training samples, there exists a pre-classification processing that improves the classification accuracy. We also explore the effect of class separation, training set size, and class balance on the relative gain from this procedure. We support our theory with an empirical investigation of the theoretical setup. Finally, we conduct an empirical study where we investigate the effect of denoising and encoding on the performance of practical deep classifiers on benchmark datasets. Specifically, we vary the size and class distribution of the training set, and the noise level, and demonstrate trends that are consistent with our theoretical results.
Problem

Research questions and friction points this paper is trying to address.

Investigates benefits of low-level preprocessing for classification tasks
Explores when preprocessing improves accuracy despite data processing inequality
Examines impact of training size and class balance on preprocessing gains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-classification processing improves finite-sample accuracy
Theoretical analysis explores class separation and training size effects
Empirical study validates denoising and encoding benefits in practice
🔎 Similar Papers
No similar papers found.