🤖 AI Summary
Weak-to-strong generalization, in which a strong pre-trained model learns from the supervision of a weaker model, typically suffers from high computational overhead and reliance on auxiliary weak models. Method: We propose an information-theoretic loss framework based on *f*-divergence that reformulates the transfer objective through loss-function reconstruction, eliminating the need for multi-model collaboration or complex distillation pipelines. Contribution/Results: Our work is the first to systematically characterize the theoretical limitations and equivalences of *f*-divergences in weak-to-strong generalization, derive sample complexity bounds, and prove that they jointly improve both noise robustness and generalization performance. Extensive experiments across multiple benchmarks demonstrate that our approach achieves superior generalization under noisy labels, without requiring any auxiliary weak model, while significantly reducing memory footprint and computational cost.
📝 Abstract
Weak-to-strong generalization (W2SG) has emerged as a promising paradigm for eliciting the capabilities of strong pre-trained models by leveraging supervision from weaker models. To improve the performance of the strong model, existing methods often require additional weak models or complex procedures, leading to substantial computational and memory overhead. Motivated by the effectiveness of $f$-divergence losses in various machine learning domains, we introduce $f$-divergence as an information-theoretic loss-function framework for W2SG. Our theoretical analysis reveals fundamental limitations and equivalences of different $f$-divergence losses in W2SG, supported by sample complexity bounds and information-theoretic insights. We empirically demonstrate that $f$-divergence loss, which generalizes widely used metrics such as KL divergence, effectively improves the generalization and noise tolerance of the strong model in practice.
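To make the family concrete, here is a minimal sketch (not the paper's implementation) of the discrete $f$-divergence $D_f(P\|Q)=\sum_i q_i\, f(p_i/q_i)$, showing how the choice $f(t)=t\log t$ recovers the familiar KL divergence; the generator functions and variable names below are illustrative assumptions:

```python
import math

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_i q_i * f(p_i / q_i) for discrete distributions.

    Assumes q_i > 0 for all i; p and q are probability vectors.
    """
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

# Two illustrative generators from the f-divergence family:
kl = lambda t: t * math.log(t) if t > 0 else 0.0  # f(t)=t log t -> KL(P||Q)
tv = lambda t: 0.5 * abs(t - 1)                   # f(t)=|t-1|/2 -> total variation

# Toy distributions (hypothetical strong-model and weak-supervisor outputs)
p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]

d_kl = f_divergence(p, q, kl)
# Direct KL for comparison: sum_i p_i log(p_i / q_i)
d_direct = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
# d_kl and d_direct agree, confirming that f(t)=t log t yields KL divergence.
```

In a W2SG training loop, `p` would be the strong model's predictive distribution and `q` the weak supervisor's labels, with the generator `f` selecting which member of the family serves as the loss.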