🤖 AI Summary
This work proposes LaPha, a novel approach to enhancing the planning and search capabilities of large language models (LLMs) in complex mathematical reasoning. LaPha introduces, for the first time, a Poincaré hyperbolic latent space into an AlphaZero-style LLM agent, modeling the search tree as a geodesic structure growing from the origin toward the boundary and thereby exploiting the exponential capacity afforded by negative curvature. The method defines a node potential based on hyperbolic geodesic distance to provide dense reward signals, and attaches a lightweight value head on the shared latent space to enable efficient test-time scaling. Experiments show that LaPha improves the accuracy of Qwen2.5-Math-1.5B on MATH-500 from 66.0% to 88.2%. Furthermore, LaPha-1.5B and LaPha-7B achieve 56.7% and 60.0% accuracy on AIME'24, respectively, significantly outperforming baseline methods.
📝 Abstract
We propose LaPha, a method for training AlphaZero-like LLM agents in a Poincaré latent space. Under LaPha, the search process can be visualized as a tree rooted at the prompt and growing outward from the origin toward the boundary of the Poincaré ball, where negative curvature provides capacity that grows exponentially with radius. Using the hyperbolic geodesic distance to rule-verified correct outcomes, we define a node potential and assign dense process rewards as potential differences. We further attach a lightweight value head to the same shared latent space, enabling self-guided test-time scaling with almost no additional overhead. On MATH-500, LaPha improves Qwen2.5-Math-1.5B from 66.0% to 88.2%. With value-head-guided search, LaPha-1.5B reaches 56.7% accuracy on AIME'24, and LaPha-7B further achieves 60.0% on AIME'24 and 53.3% on AIME'25.
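The potential-difference reward in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it uses the standard geodesic distance of the Poincaré ball model, and the specific potential (negative distance to a rule-verified goal embedding) and the `goal` point itself are hypothetical assumptions for demonstration.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball:
    d(u, v) = arccosh(1 + 2*|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

def potential(node, goal):
    # Hypothetical node potential: negative geodesic distance to a
    # rule-verified correct embedding (the "goal").
    return -poincare_distance(node, goal)

def process_reward(parent, child, goal):
    # Dense process reward for one expansion step, defined as the
    # potential difference between child and parent nodes.
    return potential(child, goal) - potential(parent, goal)

# Toy example: the root (prompt) sits at the origin; a child step that
# moves toward the goal should receive a positive dense reward.
goal = np.array([0.6, 0.0])
parent = np.zeros(2)
child = np.array([0.3, 0.0])
print(process_reward(parent, child, goal))  # positive: the step helps
```

Note that for a point `x` at radius `r`, the distance from the origin is `2 * artanh(r)`, which diverges as `r -> 1`; this is the exponentially growing capacity near the boundary that the abstract attributes to negative curvature.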