🤖 AI Summary
This work proposes TopoPrune, a data pruning framework that addresses the instability of existing geometry-based methods under cross-architecture transfer and feature noise by exploiting topological inductive bias. TopoPrune integrates a global low-dimensional manifold embedding with local differentiable persistent homology for dual-scale topological modeling: it first constructs a global manifold embedding that captures the intrinsic structure of the data, then applies differentiable persistent homology to locally optimize samples and rank them by topological complexity. Because topological representations are inherently stable, TopoPrune maintains high accuracy even at aggressive pruning rates of up to 90%, significantly outperforming current approaches while remaining robust to feature noise and transferring well across architectures.
📝 Abstract
Geometric data pruning methods, while practical for leveraging pretrained models, are fundamentally unstable: their reliance on extrinsic geometry makes them highly sensitive to latent-space perturbations, so performance degrades under cross-architecture transfer or in the presence of feature noise. We introduce TopoPrune, a framework that resolves this challenge by leveraging topology to capture the stable, intrinsic structure of data. TopoPrune operates at two scales: (1) it applies a topology-aware manifold approximation to establish a global low-dimensional embedding of the dataset; (2) it then employs differentiable persistent homology to perform a local topological optimization on the manifold embeddings, ranking samples by their structural complexity. We show that this unified dual-scale topological approach maintains high accuracy even at aggressive pruning rates (e.g., 90%). Furthermore, owing to the inherent stability of topological representations, TopoPrune is (a) exceptionally robust to noise perturbations of latent feature embeddings and (b) highly transferable across diverse network architectures. These results point to stable, principled topology-based frameworks as a promising avenue for robust data-efficient learning.
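The abstract does not spell out the ranking step, but the topological intuition can be sketched with a standard fact: the death times in the 0-dimensional persistent homology of a Vietoris–Rips filtration are exactly the edge lengths of the Euclidean minimum spanning tree. The sketch below is illustrative only and is not the authors' implementation; the function names (`persistence_scores`, `prune`) and the per-sample score (longest incident MST edge, a crude proxy for local topological complexity on an already-computed embedding) are assumptions for the example.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform


def persistence_scores(points):
    """Score each sample via 0-dim persistent homology of a Rips filtration.

    Component death times equal the Euclidean MST edge lengths, so we
    take the longest MST edge incident to each point: samples that merge
    late into the main component lie in sparse or boundary regions and
    are treated as topologically complex.
    """
    dists = squareform(pdist(points))          # pairwise distance matrix
    mst = minimum_spanning_tree(dists).toarray()
    sym = np.maximum(mst, mst.T)               # symmetrize the MST edges
    return sym.max(axis=1)                     # longest incident edge


def prune(points, keep_frac=0.1):
    """Keep the top-scoring fraction of samples (e.g., 0.1 at 90% pruning)."""
    scores = persistence_scores(points)
    k = max(1, int(len(points) * keep_frac))
    return np.argsort(scores)[::-1][:k]
```

In this toy version an isolated point far from a dense cluster merges last and is retained, mirroring the idea that topological scores favor structurally informative samples; the paper's method additionally makes the persistence computation differentiable so sample positions can be optimized before ranking.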