Latent Poincaré Shaping for Agentic Reinforcement Learning

📅 2026-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes LaPha, a novel approach to enhance the planning and search capabilities of large language models (LLMs) in complex mathematical reasoning. LaPha introduces, for the first time, a Poincaré hyperbolic latent space into an AlphaZero-style LLM agent, modeling the search tree as a geodesic structure growing from the origin toward the boundary and thereby exploiting the exponential capacity afforded by negative curvature. The method defines a node potential from hyperbolic geodesic distance, yielding dense reward signals, and attaches a lightweight value head that shares the latent space to enable efficient test-time scaling. Experiments show that LaPha improves the accuracy of Qwen2.5-Math-1.5B on MATH-500 from 66.0% to 88.2%. Furthermore, LaPha-1.5B and LaPha-7B achieve 56.7% and 60.0% accuracy on AIME'24, respectively, significantly outperforming baseline methods.
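The node potential described above is built on the geodesic distance of the Poincaré ball, whose closed form is standard in hyperbolic geometry. A minimal sketch of that distance (this is the textbook formula, not the paper's released code; variable names are illustrative):

```python
import math

def poincare_distance(x, y):
    """Geodesic distance between two points strictly inside the
    unit Poincare ball (Euclidean norms of x and y must be < 1)."""
    sq_norm = lambda v: sum(c * c for c in v)
    diff = sq_norm([a - b for a, b in zip(x, y)])
    denom = (1.0 - sq_norm(x)) * (1.0 - sq_norm(y))
    # Standard closed form: arcosh(1 + 2*||x - y||^2 / ((1-||x||^2)(1-||y||^2)))
    return math.acosh(1.0 + 2.0 * diff / denom)
```

Distance from the origin to a point at Euclidean radius r is 2·artanh(r), so it diverges as r approaches the boundary; this is the "exponentially increasing capacity" the summary refers to.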

📝 Abstract
We propose LaPha, a method for training AlphaZero-like LLM agents in a Poincaré latent space. Under LaPha, the search process can be visualized as a tree rooted at the prompt and growing outward from the origin toward the boundary of the Poincaré ball, where negative curvature provides exponentially increasing capacity with radius. Using hyperbolic geodesic distance to rule-verified correctness, we define a node potential and assign dense process rewards by potential differences. We further attach a lightweight value head on the same shared latent space, enabling self-guided test-time scaling with almost no additional overhead. On MATH-500, LaPha improves Qwen2.5-Math-1.5B from 66.0% to 88.2%. With value-head-guided search, LaPha-1.5B reaches 56.7% accuracy on AIME'24, and LaPha-7B further achieves 60.0% on AIME'24 and 53.3% on AIME'25.
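The abstract's dense process rewards are potential differences, i.e. classic potential-based reward shaping. A minimal sketch, under the assumption that the potential is the negative Poincaré geodesic distance from a node's latent embedding to a rule-verified correct anchor (function names and the choice of anchor are illustrative, not the paper's implementation):

```python
import math

def poincare_distance(x, y):
    """Geodesic distance in the unit Poincare ball (norms of x, y < 1)."""
    sq_norm = lambda v: sum(c * c for c in v)
    diff = sq_norm([a - b for a, b in zip(x, y)])
    denom = (1.0 - sq_norm(x)) * (1.0 - sq_norm(y))
    return math.acosh(1.0 + 2.0 * diff / denom)

def potential(node_embedding, goal_embedding):
    # Higher potential when the node lies geodesically closer to the
    # verified-correct anchor (hypothetical choice of potential).
    return -poincare_distance(node_embedding, goal_embedding)

def process_reward(parent_embedding, child_embedding, goal_embedding):
    # Potential-based shaping: reward a step by the change in potential,
    # positive when the child moves closer to the verified goal.
    return (potential(child_embedding, goal_embedding)
            - potential(parent_embedding, goal_embedding))
```

Because the reward telescopes along any root-to-leaf path, the sum of shaped rewards depends only on the endpoints, a standard property of potential-based shaping.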
Problem

Research questions and friction points this paper is trying to address.

Agentic Reinforcement Learning
Latent Space
Poincaré Embedding
Mathematical Reasoning
Process Rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Poincaré latent space
hyperbolic geometry
process rewards
value head
agentic reinforcement learning
Hanchen Xia
Shanghai Academy of AI for Science
Baoyou Chen
Shanghai Academy of AI for Science
Zelin Zang
Hong Kong Institute of Science & Innovation, Chinese Academy of Sciences
Deep Learning
Yutang Ge
School of Mathematical Sciences, Shanghai Jiao Tong University
Guojiang Zhao
DP Technology; Carnegie Mellon University
LLM | AI for Science
Siyu Zhu
LinkedIn
LLM | Ranking