🤖 AI Summary
Multi-step theorem proving in automated reasoning often suffers sharp performance degradation as inference depth grows, owing to the limited generalization of supervised models. This work proposes a training-free in-context learning approach that explicitly models topological dependencies in historical solution trajectories via a theorem precedence graph, guiding large language models toward structured reasoning plans. The method relies on a non-parametric structural prior, integrating retrieval-augmented graph construction, directed-graph encoding of temporal dependencies, and a stepwise symbolic executor to mitigate structural drift and eliminate reliance on supervised training. Evaluated on the FormalGeo7k benchmark, the approach achieves 89.29% accuracy, substantially outperforming existing in-context learning baselines and matching current state-of-the-art supervised models.
📝 Abstract
Multi-step theorem prediction is a central challenge in automated reasoning. Existing neural-symbolic approaches rely heavily on supervised parametric models, which exhibit limited generalization to evolving theorem libraries. In this work, we explore training-free theorem prediction through the lens of in-context learning (ICL). We identify a critical scalability bottleneck, termed Structural Drift: as reasoning depth increases, the performance of vanilla ICL degrades sharply, often collapsing to near zero. We attribute this failure to the LLM's inability to recover latent topological dependencies, leading to unstructured exploration. To address this issue, we propose Theorem Precedence Graphs, which encode temporal dependencies from historical solution traces as directed graphs, and impose explicit topological constraints that effectively prune the search space during inference. Coupled with retrieval-augmented graph construction and a stepwise symbolic executor, our approach enables LLMs to act as structured planners without any gradient-based optimization. Experiments on the FormalGeo7k benchmark show that our method achieves 89.29% accuracy, substantially outperforming ICL baselines and matching state-of-the-art supervised models. These results indicate that explicit structural priors offer a promising direction for scaling LLM-based symbolic reasoning.
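The core mechanism can be illustrated with a minimal sketch: build a directed precedence graph from retrieved solution traces, then use it to prune the candidate theorems offered to the LLM at each step. The function names and toy theorem names below are hypothetical illustrations, not the paper's actual implementation or the FormalGeo7k vocabulary.

```python
from collections import defaultdict

def build_precedence_graph(traces):
    """Build a directed theorem precedence graph from retrieved
    solution traces (each trace is an ordered list of theorem names).
    An edge u -> v records that v was applied immediately after u
    in at least one historical trace."""
    graph = defaultdict(set)
    for trace in traces:
        for u, v in zip(trace, trace[1:]):
            graph[u].add(v)
    return graph

def prune_candidates(graph, last_theorem, candidates):
    """Keep only candidates consistent with the precedence graph:
    successors of the most recently applied theorem. Fall back to
    the full candidate set if the graph carries no information."""
    allowed = graph.get(last_theorem, set())
    pruned = [c for c in candidates if c in allowed]
    return pruned or list(candidates)

# Toy traces with invented theorem names, for illustration only.
traces = [
    ["parallel_property", "angle_equal", "triangle_congruent"],
    ["parallel_property", "angle_equal", "angle_sum"],
]
g = build_precedence_graph(traces)
print(prune_candidates(g, "angle_equal",
                       ["triangle_congruent", "circle_power", "angle_sum"]))
# → ['triangle_congruent', 'angle_sum']
```

In the paper's pipeline, the pruned set would constrain the LLM's next-step proposal, with the stepwise symbolic executor validating each applied theorem before the graph is consulted again.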