🤖 AI Summary
This work introduces the Recursive Inference Machines (RIMs) framework, which enhances the reasoning capabilities of neural systems on complex problems by explicitly integrating classical recursive inference mechanisms into neural architectures. RIMs pair a neural backbone with a reweighting inference component, increasing reasoning depth and flexibility while preserving end-to-end trainability. The framework achieves state-of-the-art performance on challenging reasoning benchmarks, including ARC-AGI-1, ARC-AGI-2, and Sudoku Extreme, significantly outperforming existing methods. RIMs also outperform TabPFNs on tabular-data classification, underscoring their generality and effectiveness across diverse problem domains.
📝 Abstract
Neural reasoners such as Tiny Recursive Models (TRMs) solve complex problems by combining neural backbones with specialized inference schemes. Inference schemes of this kind have long been central to stochastic reasoning systems, where inference rules are applied to a stochastic model to derive answers to complex queries. In this work, we bridge the two paradigms by introducing Recursive Inference Machines (RIMs), a neural reasoning framework that explicitly incorporates recursive inference mechanisms inspired by classical inference engines. We show that TRMs can be expressed as an instance of RIMs, which allows us to extend them with a reweighting component, yielding better performance on challenging reasoning benchmarks, including ARC-AGI-1, ARC-AGI-2, and Sudoku Extreme. We further show that RIMs improve reasoning on other tasks, such as tabular-data classification, where they outperform TabPFNs.
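To make the high-level idea concrete, the loop below is a minimal illustrative sketch, not the paper's actual method: a hypothetical "backbone" function recursively refines a latent state (the recursive inference), and a softmax over learnable logits reweights the iterates into the final output (the reweighting component). All function names, shapes, and the specific update rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, z, W):
    """One hypothetical refinement step: z <- tanh(W @ [x; z])."""
    return np.tanh(W @ np.concatenate([x, z]))

def recursive_inference(x, W, alpha, steps=4):
    """Run `steps` recursive refinement passes, then reweight the iterates.

    `alpha` plays the role of the (learnable) reweighting logits;
    softmax(alpha) decides how much each iterate contributes.
    """
    z = np.zeros(W.shape[0])
    iterates = []
    for _ in range(steps):
        z = backbone(x, z, W)          # deeper reasoning via recursion
        iterates.append(z)
    w = np.exp(alpha - alpha.max())
    w /= w.sum()                       # softmax over the iterates
    return sum(wi * zi for wi, zi in zip(w, np.stack(iterates)))

x = rng.normal(size=3)                 # toy input
W = rng.normal(size=(5, 8)) * 0.1      # maps [x; z] (3 + 5 dims) -> z (5 dims)
alpha = np.zeros(4)                    # uniform reweighting to start
out = recursive_inference(x, W, alpha)
print(out.shape)  # (5,)
```

Because every step is differentiable, both `W` and the reweighting logits `alpha` could in principle be trained end to end, which is the property the abstract highlights.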