Tiny Recursive Control: Iterative Reasoning for Efficient Optimal Control

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high power consumption and latency incurred by large-parameter neural controllers in embedded aerospace systems, this paper proposes a novel optimal control paradigm that recursively deepens rather than widens the network. The core innovation is the first integration of recursive inference into continuous optimal control synthesis: a compact, fixed-size neural network (1.5M parameters) is reused across multiple inference rounds, augmented by dual-layer latent-space modeling, trajectory simulation, and tracking-error-driven iterative refinement—enabling emergent representational capacity. Computation scales with iteration count, while memory footprint remains constant (<10 MB). Evaluated on oscillator stabilization and fuel-constrained powered descent tasks, the method achieves near-optimal control performance with millisecond-scale GPU inference latency. Compared to large language model–based baselines, it reduces both parameter count and memory usage by two orders of magnitude.
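The refinement loop described above (simulate a trajectory, measure tracking error, correct the control sequence, repeat with the same weights) can be sketched in miniature. The sketch below is a toy under loud assumptions: a plain proportional corrector on single-integrator dynamics stands in for the paper's 1.5M-parameter network and its benchmark systems, but the control flow, where capability comes from iteration count rather than model size, mirrors the summary.

```python
import numpy as np

DT, H = 0.1, 20  # integration step and control horizon

def simulate(u, x0=1.0):
    """Roll out toy single-integrator dynamics x' = u
    (a stand-in for the paper's oscillator / powered-descent models)."""
    x, traj = x0, []
    for ui in u:
        x += ui * DT
        traj.append(x)
    return np.array(traj)

def refine(u, target, gain=0.5):
    """One refinement round: simulate, measure tracking error, correct.
    In TRC this correction would come from the shared-weight network."""
    error = target - simulate(u)
    return u + gain * error

target = np.zeros(H)   # drive the state to the origin
u = np.zeros(H)        # start from a blank control sequence
for _ in range(8):     # more iterations = more compute, same memory
    u = refine(u, target)

cost = float(np.sum(simulate(u) ** 2))
```

Because `refine` holds no per-iteration state of its own, running 8 or 80 rounds changes compute but not memory, which is the scaling property the summary highlights.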

📝 Abstract
Neural network controllers increasingly demand millions of parameters, and language model approaches push into the billions. For embedded aerospace systems with strict power and latency constraints, this scaling is prohibitive. We present Tiny Recursive Control (TRC), a neural architecture based on a counterintuitive principle: capacity can emerge from iteration depth rather than parameter count. TRC applies compact networks (approximately 1.5M parameters) repeatedly through a two-level hierarchical latent structure, refining control sequences by simulating trajectories and correcting based on tracking error. Because the same weights process every refinement step, adding iterations increases computation without increasing memory. We evaluate TRC on nonlinear control problems including oscillator stabilization and powered descent with fuel constraints. Across these domains, TRC achieves near-optimal control costs while requiring only millisecond-scale inference on GPU and under 10 MB of memory, two orders of magnitude smaller than language model baselines. These results demonstrate that recursive reasoning, previously confined to discrete tasks, transfers effectively to continuous control synthesis.
Problem

Research questions and friction points this paper is trying to address.

Addresses prohibitive scaling of large neural controllers for embedded aerospace systems.
Develops a compact neural architecture that gains capacity from iteration depth instead of parameter count.
Enables near-optimal control with minimal memory and millisecond inference times.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursive neural architecture with iterative refinement
Hierarchical latent structure for trajectory simulation
Compact network reused across refinement steps
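Read together, the three bullets above describe a control-flow skeleton: one small parameter set, a two-level latent recursion, and compute that scales with loop counts while memory does not. The sketch below is a structural illustration only; the latent sizes, update rules, and the `net`/`trc_skeleton` names are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single compact parameter block, reused by every pass below:
# the memory footprint is constant however many iterations run.
PARAMS = rng.normal(scale=0.05, size=(8, 8))

def net(v):
    """One shared-weight pass (stand-in for the ~1.5M-parameter net)."""
    return np.tanh(PARAMS @ v)

def trc_skeleton(x0, outer=3, inner=4):
    """Two-level recursion: an inner loop polishes a low-level latent h,
    an outer loop updates a high-level latent z, all with one net."""
    z = np.zeros(8)
    h = np.zeros(8)
    z[0] = x0                    # embed the (scalar) initial state
    for _ in range(outer):
        for _ in range(inner):
            h = net(h + z)       # low-level refinement passes
        z = net(z + h)           # high-level update
    return z                     # would decode to a control sequence

out = trc_skeleton(1.0)
```

Raising `outer` or `inner` adds forward passes (compute) without allocating any new weights, which is how iteration depth substitutes for parameter count.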