🤖 AI Summary
Existing token-level adaptive computation methods lack verifiable evaluation of how well compute allocation aligns with task complexity, particularly in natural-language tasks where token difficulty is unobservable and confounded by architectural biases. This work proposes ANIRA, a unified recurrent Transformer framework enabling per-token variable-depth computation, and introduces a synthetic language task with parameterized difficulty to serve as a controllable evaluation paradigm. By decoupling compute allocation from other model factors, the study presents the first systematic analysis, conducted without difficulty supervision, of alignment between computation and complexity, generalization capability, and decision timing. The findings show that alignment can emerge without explicit difficulty supervision, yet such alignment fails to support algorithmic extrapolation; early compute decisions rely on static structural cues, whereas online halting mechanisms better track the dynamic state of algorithmic execution.
📝 Abstract
Token-level adaptive computation seeks to reduce inference cost by allocating more computation to harder tokens and less to easier ones. However, prior work is primarily evaluated on natural-language benchmarks using task-level metrics, where token-level difficulty is unobservable and confounded with architectural factors, making it unclear whether compute allocation truly aligns with underlying complexity. We address this gap through three contributions. First, we introduce a complexity-controlled evaluation paradigm using algorithmic and synthetic language tasks with parameterized difficulty, enabling direct testing of token-level compute allocation. Second, we propose ANIRA, a unified recurrent Transformer framework that supports per-token variable-depth computation while isolating compute allocation decisions from other model factors. Third, we use this framework to conduct a systematic analysis of token-level adaptive computation across alignment with complexity, generalization, and decision timing. Our results show that compute allocation aligned with task complexity can emerge without explicit difficulty supervision, but such alignment does not imply algorithmic generalization: models fail to extrapolate to unseen input sizes despite allocating additional computation. We further find that early compute decisions rely on static structural cues, whereas online halting more closely tracks algorithmic execution state.
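To make the core mechanism concrete, here is a minimal sketch of per-token variable-depth computation via ACT-style cumulative halting. This is not ANIRA's actual architecture (the paper does not specify it here); the function names (`act_depth`, `step_fn`, `halt_fn`), the NumPy state representation, and the toy dynamics are all illustrative assumptions. Each token repeatedly applies a shared recurrent block until its accumulated halting probability crosses a threshold, so "harder" tokens receive more steps:

```python
import numpy as np

def act_depth(token_states, step_fn, halt_fn, max_steps=8, eps=0.01):
    """Per-token adaptive depth via cumulative halting (ACT-style sketch).

    Each token repeatedly applies a shared recurrent block (step_fn)
    until its cumulative halting probability exceeds 1 - eps or
    max_steps is reached. Returns final states and per-token step counts.
    """
    states = token_states.astype(float).copy()
    n = states.shape[0]
    cum_halt = np.zeros(n)              # accumulated halting probability
    steps = np.zeros(n, dtype=int)      # compute depth used per token
    active = np.ones(n, dtype=bool)     # tokens still being processed
    for _ in range(max_steps):
        if not active.any():
            break
        states[active] = step_fn(states[active])          # shared block
        cum_halt[active] += halt_fn(states[active])       # online halting
        steps[active] += 1
        active &= cum_halt < 1.0 - eps                    # freeze halted tokens
    return states, steps

# Toy demo (hypothetical dynamics): shrinking a token's state stands in
# for "finishing" its computation; larger initial values need more steps.
states, steps = act_depth(
    np.array([[0.1], [5.0]]),
    step_fn=lambda s: 0.5 * s,
    halt_fn=lambda s: 1.0 / (1.0 + np.abs(s[:, 0])),
)
# steps -> [2, 3]: the "easy" token halts earlier than the "hard" one
```

The boolean `active` mask is what makes the allocation per-token rather than per-sequence: halted tokens stop consuming compute while others continue, which is the behavior whose alignment with task difficulty the paper evaluates.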