🤖 AI Summary
Symbolic reasoning in fundamental physics is hindered by data scarcity and the absence of domain-informed priors. To address this, we propose Learning at Criticality (LaC), the first framework to integrate statistical-physics phase-transition theory into large language model (LLM) training. LaC employs reinforcement learning to steer model parameters toward a critical point, where scale-invariant exploration and power-law path distributions give rise to a "critical thinking mode" with enhanced rule abstraction. Combined with concept-network modeling (CoNet) and symbolic perturbation tasks such as Matsubara summation, LaC enables an 8B-parameter LLM to solve unseen high-order quantum field theory problems from only a few examples, outperforming much larger models. It also achieves zero-shot generalization on abstract tasks such as 7-digit base-7 addition. These results substantially advance few-shot symbolic reasoning, overcoming longstanding bottlenecks in physics-informed AI.
📝 Abstract
Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical reliance on vast training datasets hinders its use at these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning (RL) scheme that tunes Large Language Models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition -- a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model (CoNet) designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition. This transition exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a "critical thinking pattern" crucial for generalization, enabled by the underlying scale-free exploration. This suggests LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
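To make the 7-digit base-7 addition benchmark concrete, the sketch below generates problem instances of the kind described. This is an illustrative assumption about the task format, not the authors' actual data pipeline; the function names are our own.

```python
import random


def to_base7(n: int) -> str:
    """Render a non-negative integer as a base-7 digit string."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % 7))
        n //= 7
    return "".join(reversed(digits))


def sample_problem(rng: random.Random, num_digits: int = 7):
    """Draw two numbers with exactly `num_digits` base-7 digits and their sum.

    The range [7**(d-1), 7**d - 1] contains precisely the d-digit
    base-7 integers, so both operands have the required length.
    """
    lo, hi = 7 ** (num_digits - 1), 7 ** num_digits - 1
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    return to_base7(a), to_base7(b), to_base7(a + b)


if __name__ == "__main__":
    rng = random.Random(0)
    a, b, s = sample_problem(rng)
    print(f"{a} + {b} = {s}  (all in base 7)")
```

Solving such instances requires carrying across all seven digit positions in an unfamiliar base, which is why the task probes rule abstraction rather than memorized decimal arithmetic.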