How fast are algorithms reducing the demands on memory? A survey of progress in space complexity

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work surveys the trajectory of algorithmic space complexity across 118 core problems in computer science, encompassing more than 800 algorithms. Method: a large-scale literature survey, combined with historical complexity data and theoretical analysis, quantifies trends in memory-efficiency improvements relative to hardware advances. Contribution/Results: it is the first to show empirically that, in 20% of cases, algorithmic space improvements outpaced gains in DRAM access speed. The paper also documents the emergence of time–space trade-off Pareto frontiers, where better asymptotic time complexity comes at the cost of worse asymptotic space complexity and vice versa. The findings confirm that memory efficiency has become a critical constraint in modern algorithm design. To support reproducible research and engineering practice, the authors release an open algorithm reference (https://algorithm-wiki.csail.mit.edu) cataloguing these trade-offs for both theoretical analysis and system implementation.

📝 Abstract
Algorithm research focuses primarily on how many operations processors need to do (time complexity). But for many problems, both the runtime and energy used are dominated by memory accesses. In this paper, we present the first broad survey of how algorithmic progress has improved memory usage (space complexity). We analyze 118 of the most important algorithm problems in computer science, reviewing the 800+ algorithms used to solve them. Our results show that space complexity has become much more important in recent years as worries have arisen about memory access bottlenecking performance (the "memory wall"). In 20% of cases we find that space complexity improvements for large problems (n = 1 billion) outpaced improvements in DRAM access speed, suggesting that for these problems algorithmic progress played a larger role than hardware progress in minimizing memory access delays. Increasingly, we also see the emergence of algorithmic Pareto frontiers, where getting better asymptotic time complexity for a problem requires getting worse asymptotic space complexity, and vice versa. This tension implies that programmers will increasingly need to consider multiple algorithmic options to understand which is best for their particular problem. To help theorists and practitioners alike consider these trade-offs, we have created a reference for them at https://algorithm-wiki.csail.mit.edu.
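The Pareto-frontier idea from the abstract can be made concrete with a small sketch. This is not code from the paper: the algorithm names and complexity exponents below are hypothetical, and each option is modeled as O(n^t) time and O(n^s) space. An option sits on the frontier if no other option is at least as good in both dimensions and strictly better in one.

```python
# Illustrative sketch (hypothetical data, not from the paper): finding the
# time-space Pareto frontier among algorithm variants for one problem.
# Each entry is (name, time_exponent, space_exponent), meaning
# O(n^t) time and O(n^s) space; lower is better in both dimensions.

algorithms = [
    ("A", 3.0, 1.0),   # cubic time, linear space
    ("B", 2.5, 1.5),
    ("C", 2.0, 2.0),
    ("D", 2.5, 2.5),   # dominated by B: equal time, strictly more space
]

def pareto_frontier(options):
    """Return options not dominated by any other option.

    An option (t, s) is dominated if some other option is <= in both
    exponents and strictly < in at least one.
    """
    frontier = []
    for name, t, s in options:
        dominated = any(
            t2 <= t and s2 <= s and (t2 < t or s2 < s)
            for _, t2, s2 in options
        )
        if not dominated:
            frontier.append((name, t, s))
    return frontier

print(pareto_frontier(algorithms))
# A, B, and C survive; D is dominated by B
```

On the frontier that remains, no single option is "best": moving from A to C trades asymptotically more space for asymptotically less time, which is exactly the tension the abstract says programmers will increasingly have to navigate.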
Problem

Research questions and friction points this paper is trying to address.

Surveying algorithmic progress in reducing memory demands
Analyzing space complexity improvements versus hardware speed advancements
Exploring trade-offs between time and space complexity in algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveyed 118 algorithm problems for space complexity
Analyzed 800+ algorithms to assess memory usage improvements
Created online reference for algorithmic time-space trade-offs