🤖 AI Summary
This work addresses the excessive memory and computational overhead of group-algebraic matrix decomposition methods for continual counting under differential privacy in streaming settings. We propose a memory-efficient decomposition framework. Our core contribution is to identify structural properties inherent to group-algebraic decompositions and to combine them with a binning strategy, reducing both per-update memory consumption and running time to Õ(√n). The method strictly satisfies differential privacy under continual observation while significantly mitigating error accumulation, and its theoretical accuracy approaches the information-theoretic lower bound. Experiments demonstrate that our approach achieves low latency, a minimal memory footprint, and high statistical utility on large-scale streaming counting tasks. It thus provides a scalable, engineering-ready foundation for privacy-preserving learning systems.
📝 Abstract
We study memory-efficient matrix factorization for differentially private counting under continual observation. While recent work by Henzinger and Upadhyay (2024) introduced a factorization method with reduced error based on group algebra, its practicality in streaming settings remains limited by computational constraints. We present new structural properties of the group algebra factorization, enabling the use of a binning technique from Andersson and Pagh (2024). By grouping similar values in rows, the binning method reduces memory usage and running time to $\tilde{O}(\sqrt{n})$, where $n$ is the length of the input stream, while maintaining low error. Our work bridges the gap between theoretical improvements in factorization accuracy and practical efficiency in large-scale private learning systems.
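To illustrate the binning idea in isolation: factorization-based continual counting correlates noise across steps with a decaying sequence of coefficients, and because consecutive coefficients are nearly equal, grouping them into bins and storing one representative value per bin shrinks the number of distinct values that must be tracked. The sketch below is a minimal, standalone illustration, not the paper's algorithm; the choice of coefficients (those of the classic square-root factorization, $c_k = c_{k-1}\,(2k-1)/(2k)$) and the bin count are assumptions for demonstration.

```python
import math

def make_coeffs(n):
    # Toeplitz coefficients of the square-root factorization used in
    # continual counting: c_0 = 1, c_k = c_{k-1} * (2k-1) / (2k).
    # These decay slowly, so neighboring values are nearly equal.
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (2 * k - 1) / (2 * k))
    return c

def bin_coeffs(c, num_bins):
    # Binning: partition consecutive coefficients into num_bins groups
    # and replace each group by its average.  Only num_bins distinct
    # values remain, so a streaming implementation needs to maintain
    # one noise accumulator per bin instead of one per stream position.
    n = len(c)
    size = math.ceil(n / num_bins)
    binned = []
    for start in range(0, n, size):
        chunk = c[start:start + size]
        binned.extend([sum(chunk) / len(chunk)] * len(chunk))
    return binned

if __name__ == "__main__":
    n = 1024
    c = make_coeffs(n)
    b = bin_coeffs(c, int(math.isqrt(n)))  # ~sqrt(n) bins
    print("distinct values before:", len(set(c)))
    print("distinct values after: ", len(set(b)))
```

With roughly $\sqrt{n}$ bins, the memory dedicated to coefficient state drops from $n$ values to about $\sqrt{n}$, which is the source of the $\tilde{O}(\sqrt{n})$ space bound; the analysis in the paper controls the extra error this averaging introduces.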