🤖 AI Summary
Conventional model compression techniques often incur significant accuracy degradation. Method: This work proposes and systematically validates perforated backpropagation, a biologically inspired technique motivated by dendritic computation in neurons, which introduces structured sparsity into the backward pass via gradient sparsification and avoids the prune-and-fine-tune paradigm. Contribution/Results: Through a distributed experimental effort co-led by Pittsburgh-based ML practitioners and students, this study delivers the first large-scale empirical evaluation of the method across diverse real-world models and datasets. Results demonstrate up to 90% parameter compression with no loss of accuracy, or up to 16% accuracy gain with no increase in parameter count, substantially outperforming existing biologically inspired optimization methods. The approach exhibits strong robustness, scalability, and practical utility across realistic tasks, establishing perforated backpropagation as a new paradigm for efficient neural computation.
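To make the idea of "structured sparsity in the backward pass via gradient sparsification" concrete, here is a minimal, hypothetical sketch of magnitude-based gradient sparsification: only the largest-magnitude fraction of gradient entries is kept during the backward pass, and the rest are zeroed. This is an illustration of the general concept as described above, not the authors' Perforated Backpropagation implementation; the function name and `keep_fraction` parameter are assumptions for the example.

```python
# Hypothetical illustration: magnitude-based gradient sparsification.
# NOT the Perforated Backpropagation implementation, just the general idea
# of keeping only the most important gradient entries in the backward pass.

def sparsify_gradient(grad, keep_fraction=0.1):
    """Zero all but the top `keep_fraction` of gradient entries by magnitude."""
    if not grad:
        return []
    # Number of entries to keep (at least one).
    k = max(1, int(len(grad) * keep_fraction))
    # Threshold is the magnitude of the k-th largest entry.
    threshold = sorted((abs(g) for g in grad), reverse=True)[k - 1]
    # Keep entries at or above the threshold; zero the rest.
    return [g if abs(g) >= threshold else 0.0 for g in grad]
```

In a real training loop this kind of filter would typically be applied to parameter gradients between the backward pass and the optimizer step (e.g., via a gradient hook in a deep learning framework).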
📝 Abstract
Perforated Backpropagation is a neural network optimization technique based on the modern understanding of the computational importance of dendrites within biological neurons. This paper presents experiments extending the original publication, generated at a hackathon held at the Carnegie Mellon Swartz Center in February 2025. The hackathon brought together students and local Pittsburgh ML practitioners to apply the Perforated Backpropagation algorithm to the datasets and models they were already using in their own projects. Results showed that the system could enhance their projects, with up to 90% model compression without negative impact on accuracy, or up to 16% higher accuracy than their original models.